CS236781: Deep Learning on Computational Accelerators¶

Homework Assignment 2¶

Faculty of Computer Science, Technion.

Submitted by:

# Name Id email
Student 1 Dan Sdeor 209509181 dansdeor@campus.technion.ac.il
Student 2 Levi Hurvitz 313511602 levihorvitz@campus.technion.ac.il

Introduction¶

In this assignment we'll create a from-scratch implementation of two fundamental deep learning concepts: the backpropagation algorithm and stochastic gradient descent-based optimizers. In addition, you will create a general-purpose multilayer perceptron, the core building block of deep neural networks. We'll visualize decision boundaries and ROC curves in the context of binary classification. Following that we will focus on convolutional networks with residual blocks. We'll create our own network architectures and train them using GPUs on the course servers, then we'll conduct architecture experiments to determine the effects of different architectural decisions on the performance of deep networks.

General Guidelines¶

  • Please read the getting started page on the course website. It explains how to set up, run and submit the assignment.
  • Please read the course servers usage guide. It explains how to use and run your code on the course servers to benefit from training with GPUs.
  • The text and code cells in these notebooks are intended to guide you through the assignment and help you verify your solutions. The notebooks do not need to be edited at all (unless you wish to play around). The only exception is to fill your name(s) in the above cell before submission. Please do not remove sections or change the order of any cells.
  • All your code (and even answers to questions) should be written in the files within the python package corresponding to the assignment number (hw1, hw2, etc). You can of course use any editor or IDE to work on these files.

Contents¶

  • Part 1: Backpropagation
  • Part 2: Optimization and Training
  • Part 3: Binary Classification with Multilayer Perceptrons
  • Part 4: Convolutional Neural Networks
  • Part 5: Convolutional Architecture Experiments
  • Part 6: YOLO - Object Detection
$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$

Part 1: Backpropagation¶

In this part, we'll implement backpropagation and automatic differentiation from scratch and compare our implementations to PyTorch's built in implementation (autograd).

In [1]:
import torch
import unittest

%load_ext autoreload
%autoreload 2

test = unittest.TestCase()

Reminder: The backpropagation algorithm is at the core of training deep models. To state the problem we'll tackle in this notebook, imagine we have an L-layer MLP model, defined as $$ \hat{\vec{y}}^i = \vec{y}_L^i = \varphi_L \left( \mat{W}_L \varphi_{L-1} \left( \cdots \varphi_1 \left( \mat{W}_1 \vec{x}^i + \vec{b}_1 \right) \cdots \right) + \vec{b}_L \right), $$ a pointwise loss function $\ell(\vec{y}, \hat{\vec{y}})$ and an empirical loss over our entire data set, $$ L(\vec{\theta}) = \frac{1}{N} \sum_{i=1}^{N} \ell(\vec{y}^i, \hat{\vec{y}}^i) + R(\vec{\theta}) $$

where $\vec{\theta}$ is a vector containing all network parameters, e.g. $\vec{\theta} = \left[ \mat{W}_{1,:}, \vec{b}_1, \dots, \mat{W}_{L,:}, \vec{b}_L \right]$.

In order to train our model we would like to calculate the derivative (or gradient, in the multivariate case) of the loss with respect to each and every one of the parameters, i.e. $\pderiv{L}{\mat{W}_j}$ and $\pderiv{L}{\vec{b}_j}$ for all $j$. Since the gradient "points" in the direction of functional increase, the negative gradient is often used as a descent direction for descent-based optimization algorithms. In other words, iteratively updating each parameter proportionally to its negative gradient can lead to convergence to a local minimum of the loss function.

Calculus tells us that as long as we know the derivatives of all the functions "along the way" ($\varphi_i(\cdot),\ \ell(\cdot,\cdot),\ R(\cdot)$) we can use the chain rule to calculate the derivative of the loss with respect to any one of the parameter vectors. Note that if the loss $L(\vec{\theta})$ is scalar (which is usually the case), the gradient of a parameter will have the same shape as the parameter itself (matrix/vector/tensor of same dimensions).

For deep models that are a composition of many functions, calculating the gradient of each parameter by hand and implementing hard-coded gradient derivations quickly becomes infeasible. Additionally, such code makes models hard to change, since any change potentially requires re-derivation and re-implementation of the entire gradient function.

The backpropagation algorithm, which we saw in the lecture, provides us with an effective method of applying the chain rule recursively so that we can implement gradient calculations of arbitrarily deep or complex models.

We'll now implement backpropagation using a modular approach, which will allow us to chain many component layers together and get automatic gradient calculation of the output with respect to the input or any intermediate parameter.

To do this, we'll define a Layer class. Here's the API of this class:

In [2]:
import hw2.layers as layers
help(layers.Layer)
Help on class Layer in module hw2.layers:

class Layer(abc.ABC)
 |  A Layer is some computation element in a network architecture which
 |  supports automatic differentiation using forward and backward functions.
 |  
 |  Method resolution order:
 |      Layer
 |      abc.ABC
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __call__(self, *args, **kwargs)
 |      Call self as a function.
 |  
 |  __init__(self)
 |      Initialize self.  See help(type(self)) for accurate signature.
 |  
 |  __repr__(self)
 |      Return repr(self).
 |  
 |  backward(self, dout)
 |      Computes the backward pass of the layer, i.e. the gradient
 |      calculation of the final network output with respect to each of the
 |      parameters of the forward function.
 |      :param dout: The gradient of the network with respect to the
 |      output of this layer.
 |      :return: A tuple with the same number of elements as the parameters of
 |      the forward function. Each element will be the gradient of the
 |      network output with respect to that parameter.
 |  
 |  forward(self, *args, **kwargs)
 |      Computes the forward pass of the layer.
 |      :param args: The computation arguments (implementation specific).
 |      :return: The result of the computation.
 |  
 |  params(self)
 |      :return: Layer's trainable parameters and their gradients as a list
 |      of tuples, each tuple containing a tensor and it's corresponding
 |      gradient tensor.
 |  
 |  train(self, training_mode=True)
 |      Changes the mode of this layer between training and evaluation (test)
 |      mode. Some layers have different behaviour depending on mode.
 |      :param training_mode: True: set the model in training mode. False: set
 |      evaluation mode.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)
 |  
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |  
 |  __abstractmethods__ = frozenset({'backward', 'forward', 'params'})

In other words, a Layer can be anything: a layer, an activation function, a loss function or generally any computation that we know how to derive a gradient for.

Each Layer must define a forward() function and a backward() function.

  • The forward() function performs the actual calculation/operation of the block and returns an output.
  • The backward() function computes the gradient of the input and parameters as a function of the gradient of the output, according to the chain rule.

Here's a diagram illustrating the above explanation:

Note that the diagram doesn't show that if the function is parametrized, i.e. $f(\vec{x},\vec{y})=f(\vec{x},\vec{y};\vec{w})$, there are also gradients to calculate for the parameters $\vec{w}$.

The forward pass is straightforward: just do the computation. To understand the backward pass, imagine that there's some "downstream" loss function $L(\vec{\theta})$ and magically somehow we are told the gradient of that loss with respect to the output $\vec{z}$ of our block, i.e. $\pderiv{L}{\vec{z}}$.

Now, since we know how to calculate the derivative of $f(\vec{x},\vec{y};\vec{w})$, it means we know how to calculate $\pderiv{\vec{z}}{\vec{x}}$, $\pderiv{\vec{z}}{\vec{y}}$ and $\pderiv{\vec{z}}{\vec{w}}$ . Thanks to the chain rule, this is all we need to calculate the gradients of the loss w.r.t. the input and parameters:

$$ \begin{align} \pderiv{L}{\vec{x}} &= \pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{x}}\\ \pderiv{L}{\vec{y}} &= \pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{y}}\\ \pderiv{L}{\vec{w}} &= \pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{w}} \end{align} $$
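To make the forward/backward contract concrete, here's a minimal toy sketch (our own illustration, not part of the hw2 API) of a parameter-free block computing an element-wise product $\vec{z}=\vec{x}\odot\vec{y}$; its backward() applies exactly the chain-rule products above:

class Multiply:
    """A toy block computing z = x * y element-wise (sketch only)."""

    def forward(self, x, y):
        # Cache the inputs; they're needed to compute the backward pass.
        self.x, self.y = x, y
        return x * y

    def backward(self, dout):
        # dout is dL/dz from downstream. By the chain rule:
        # dL/dx = dout * dz/dx = dout * y,   dL/dy = dout * dz/dy = dout * x.
        return dout * self.y, dout * self.x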

Comparison with PyTorch¶

PyTorch has the nn.Module base class, which may seem similar to our Layer since it also represents a computation element in a network. However, PyTorch's nn.Modules don't compute the gradient directly; they only define the forward calculations. Instead, PyTorch has a more low-level API for defining a function and explicitly implementing its forward() and backward(). See autograd.Function. When an operation is performed on a tensor, it creates a Function instance which performs the operation and stores any necessary information for calculating the gradient later on. Additionally, Function instances point to the other Function objects representing the operations performed earlier on the tensor. Thus, a graph (or DAG) of operations is created (this is not 100% exact, as the graph is actually composed of a different type of class which wraps the backward method, but it's accurate enough for our purposes).

A Tensor instance which was created by performing operations on one or more tensors with requires_grad=True has a grad_fn property, which is a Function instance representing the last operation performed to produce this tensor. This exposes the graph of Function instances, each with its own backward() function. Therefore, in PyTorch the backward() function is called on the tensors, not the modules.

Our Layers are therefore a combination of the ideas in Module and Function and we'll implement them together, just to make things simpler. Our goal here is to create a "poor man's autograd": We'll use PyTorch tensors, but we'll calculate and store the gradients in our Layers (or return them). The gradients we'll calculate are of the entire block, not individual operations on tensors.

To test our implementation, we'll use PyTorch's autograd.

Note that of course this method of tracking gradients is much more limited than what PyTorch offers. However it allows us to implement the backpropagation algorithm very simply and really see how it works.

Let's set up some testing instrumentation:

In [3]:
from hw2.grad_compare import compare_layer_to_torch

def test_block_grad(block: layers.Layer, x, y=None, delta=1e-3):
    diffs = compare_layer_to_torch(block, x, y)
    
    # Assert diff values
    for diff in diffs:
        test.assertLess(diff, delta)

# Show the compare function
compare_layer_to_torch??

Notes:

  • After you complete your implementation, you should make sure to read and understand the compare_layer_to_torch() function. It will help you understand what PyTorch is doing.
  • The value of delta above should not actually be needed: a correct implementation will give you a diff of exactly zero.

Layer Implementations¶

We'll now implement some Layers that will enable us to later build an MLP model of arbitrary depth, complete with automatic differentiation.

For each block, you'll first implement the forward() function. Then, you will calculate the derivative of the block by hand with respect to each of its input tensors and each of its parameter tensors (if any). Using your manually-calculated derivation, you can then implement the backward() function.

Notice that we have intermediate Jacobians that are potentially high dimensional tensors. For example in the expression $\pderiv{L}{\vec{w}} = \pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{w}}$, the term $\pderiv{\vec{z}}{\vec{w}}$ is a 4D Jacobian if both $\vec{z}$ and $\vec{w}$ are 2D matrices.

In order to implement the backpropagation algorithm efficiently, we need to implement every backward function without explicitly constructing this Jacobian. Instead, we're interested in directly calculating the vector-Jacobian product (VJP) $\pderiv{L}{\vec{z}}\cdot \pderiv{\vec{z}}{\vec{w}}$. In order to do this, you should try to figure out the gradient of the loss with respect to one element, e.g. $\pderiv{L}{\vec{w}_{1,1}}$ and extrapolate from there how to directly obtain the VJP.
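For instance, for an affine map $\mat{Z}=\mat{X}\mattr{W}$, the VJP with respect to $\mat{W}$ collapses to a single matrix product, which we can sanity-check against autograd. A hedged sketch (the shapes here are made up for illustration):

import torch

N, d_in, d_out = 5, 4, 3
X = torch.randn(N, d_in)
W = torch.randn(d_out, d_in, requires_grad=True)
Z = X @ W.T
dout = torch.randn(N, d_out)      # pretend this is dL/dZ from downstream
Z.backward(dout)                  # autograd computes the VJP for us

dW_direct = dout.T @ X            # the VJP computed directly; no 4-D Jacobian
print(torch.allclose(W.grad, dW_direct))  # True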

Activation functions¶

(Leaky) ReLU¶

ReLU, or rectified linear unit, is a very common activation function in deep learning architectures. In its most standard form, as we'll implement here, it has no parameters.

We'll first implement the "leaky" version, defined as

$$ \mathrm{relu}(\vec{x}) = \max(\alpha\vec{x},\vec{x}), \ 0\leq\alpha<1 $$

This is similar to the ReLU activation we've seen in class, except that it has a small non-zero slope when its input is negative. Note that it's not strictly differentiable, however it has sub-gradients, defined separately for positive-valued inputs and for negative-valued inputs.
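As a hedged sketch of the math (the actual class structure is up to your hw2/layers.py implementation), the forward and backward passes might look like:

import torch

def leaky_relu_forward(x, alpha=0.01):
    out = torch.max(x, alpha * x)   # equals max(alpha*x, x) since 0 <= alpha < 1
    cache = x > 0                   # remember where the input was positive
    return out, cache

def leaky_relu_backward(dout, cache, alpha=0.01):
    # Sub-gradient: 1 for positive inputs, alpha for negative inputs.
    return dout * torch.where(cache, torch.ones_like(dout), torch.full_like(dout, alpha))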

TODO: Complete the implementation of the LeakyReLU class in the hw2/layers.py module.

In [4]:
N = 100
in_features = 200
num_classes = 10
eps = 1e-6
In [5]:
# Test LeakyReLU
alpha = 0.1
lrelu = layers.LeakyReLU(alpha=alpha)
x_test = torch.randn(N, in_features)

# Test forward pass
z = lrelu(x_test)
test.assertSequenceEqual(z.shape, x_test.shape)
test.assertTrue(torch.allclose(z, torch.nn.LeakyReLU(alpha)(x_test), atol=eps))

# Test backward pass
test_block_grad(lrelu, x_test)
Comparing gradients... 
input    diff=0.000

Now using the LeakyReLU, we can trivially define a regular ReLU block as a special case.

TODO: Complete the implementation of the ReLU class in the hw2/layers.py module.

In [6]:
# Test ReLU
relu = layers.ReLU()
x_test = torch.randn(N, in_features)

# Test forward pass
z = relu(x_test)
test.assertSequenceEqual(z.shape, x_test.shape)
test.assertTrue(torch.allclose(z, torch.relu(x_test), atol=eps))

# Test backward pass
test_block_grad(relu, x_test)
Comparing gradients... 
input    diff=0.000

Sigmoid¶

The sigmoid function $\sigma(x)$ is also sometimes used as an activation function. We have also seen it previously in the context of logistic regression.

The sigmoid function is defined as

$$ \sigma(\vec{x}) = \frac{1}{1+\exp(-\vec{x})}. $$

TODO: Complete the implementation of the Sigmoid class in the hw2/layers.py module.
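A useful identity for the backward pass is $\sigma'(x)=\sigma(x)\left(1-\sigma(x)\right)$, so caching the forward output is all that's needed. A minimal sketch of the math (the class structure is up to your implementation):

import torch

def sigmoid_forward(x):
    out = torch.sigmoid(x)   # 1 / (1 + exp(-x))
    return out, out          # the output itself is the only cache needed

def sigmoid_backward(dout, out):
    # d(sigma)/dx = sigma * (1 - sigma), element-wise.
    return dout * out * (1.0 - out)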
In [7]:
# Test Sigmoid
sigmoid = layers.Sigmoid()
x_test = torch.randn(N, in_features, in_features) # 3D input should work

# Test forward pass
z = sigmoid(x_test)
test.assertSequenceEqual(z.shape, x_test.shape)
test.assertTrue(torch.allclose(z, torch.sigmoid(x_test), atol=eps))

# Test backward pass
test_block_grad(sigmoid, x_test)
Comparing gradients... 
input    diff=0.000

Hyperbolic Tangent¶

The hyperbolic tangent function $\tanh(x)$ is a common activation function used when the output should be in the range [-1, 1].

The tanh function is defined as

$$ \tanh(\vec{x}) = \frac{\exp(\vec{x})-\exp(-\vec{x})}{\exp(\vec{x})+\exp(-\vec{x})}. $$

TODO: Complete the implementation of the TanH class in the hw2/layers.py module.
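Here too a single identity, $\tanh'(x)=1-\tanh^2(x)$, gives the backward pass from the cached forward output. A minimal sketch of the math:

import torch

def tanh_forward(x):
    out = torch.tanh(x)
    return out, out          # cache the output for the backward pass

def tanh_backward(dout, out):
    # d(tanh)/dx = 1 - tanh(x)^2, element-wise.
    return dout * (1.0 - out ** 2)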
In [8]:
# Test TanH
tanh = layers.TanH()
x_test = torch.randn(N, in_features, in_features) # 3D input should work

# Test forward pass
z = tanh(x_test)
test.assertSequenceEqual(z.shape, x_test.shape)
test.assertTrue(torch.allclose(z, torch.tanh(x_test), atol=eps))

# Test backward pass
test_block_grad(tanh, x_test)
Comparing gradients... 
input    diff=0.000

Linear (fully connected) layer¶

First, we'll implement an affine transform layer, also known as a fully connected layer.

Given an input $\mat{X}$ the layer computes,

$$ \mat{Z} = \mat{X} \mattr{W} + \vec{b} ,~ \mat{X}\in\set{R}^{N\times D_{\mathrm{in}}},~ \mat{W}\in\set{R}^{D_{\mathrm{out}}\times D_{\mathrm{in}}},~ \vec{b}\in\set{R}^{D_{\mathrm{out}}}. $$

Notes:

  • We write it this way to follow the implementation conventions.
  • $N$ is the number of samples in the input (batch size). The input $\mat{X}$ will always be a tensor containing a batch dimension first.
  • Thanks to broadcasting, $\vec{b}$ can remain a vector even though the input $\mat{X}$ is a matrix.
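Combining the equation above with the VJP discussion, the layer's backward pass reduces to three matrix products. A hedged sketch of the shape bookkeeping (the hw2 class structure is up to you):

def linear_forward(X, W, b):
    # X: (N, D_in), W: (D_out, D_in), b: (D_out,); b broadcasts over the batch.
    return X @ W.T + b

def linear_backward(dout, X, W):
    # dout = dL/dZ, shape (N, D_out).
    dX = dout @ W          # (N, D_out) @ (D_out, D_in) -> (N, D_in)
    dW = dout.T @ X        # (D_out, N) @ (N, D_in)     -> (D_out, D_in)
    db = dout.sum(dim=0)   # gradient accumulates over the batch dimension
    return dX, dW, db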

TODO: Complete the implementation of the Linear class in the hw2/layers.py module.

In [9]:
# Test Linear
out_features = 1000
fc = layers.Linear(in_features, out_features)
x_test = torch.randn(N, in_features)

# Test forward pass
z = fc(x_test)
test.assertSequenceEqual(z.shape, [N, out_features])
torch_fc = torch.nn.Linear(in_features, out_features,bias=True)
torch_fc.weight = torch.nn.Parameter(fc.w)
torch_fc.bias = torch.nn.Parameter(fc.b)
test.assertTrue(torch.allclose(torch_fc(x_test), z, atol=eps))

# Test backward pass
test_block_grad(fc, x_test)

# Test second backward pass
x_test = torch.randn(N, in_features)
z = fc(x_test)
z = fc(x_test)
test_block_grad(fc, x_test)
Comparing gradients... 
input    diff=0.000
param#01 diff=0.000
param#02 diff=0.000
Comparing gradients... 
input    diff=0.000
param#01 diff=0.000
param#02 diff=0.000

Cross-Entropy Loss¶

As you know by now, cross-entropy is a common loss function for classification tasks. In class, we defined it as

$$\ell_{\mathrm{CE}}(\vec{y},\hat{\vec{y}}) = - {\vectr{y}} \log(\hat{\vec{y}})$$

where $\hat{\vec{y}} = \mathrm{softmax}(\vec{x})$ is a probability vector (the output of softmax on the class scores $\vec{x}$) and the vector $\vec{y}$ is a 1-hot encoded class label.

However, it's tricky to compute the gradient of softmax, so instead we'll define a version of cross-entropy that produces the exact same output but works directly on the class scores $\vec{x}$.

We can write, $$\begin{align} \ell_{\mathrm{CE}}(\vec{y},\hat{\vec{y}}) &= - {\vectr{y}} \log(\hat{\vec{y}}) = - {\vectr{y}} \log\left(\mathrm{softmax}(\vec{x})\right) \\ &= - {\vectr{y}} \log\left(\frac{e^{\vec{x}}}{\sum_k e^{x_k}}\right) \\ &= - \log\left(\frac{e^{x_y}}{\sum_k e^{x_k}}\right) \\ &= - \left(\log\left(e^{x_y}\right) - \log\left(\sum_k e^{x_k}\right)\right)\\ &= - x_y + \log\left(\sum_k e^{x_k}\right) \end{align}$$

Where the scalar $y$ is the correct class label, so $x_y$ is the correct class score.

Note that this version of cross entropy is also what's provided by PyTorch's nn module.
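Numerically, the $\log\sum_k e^{x_k}$ term is best computed with the log-sum-exp trick (shifting the scores by their per-sample maximum, which doesn't change the loss value). A hedged sketch for a batch of scores:

import torch

def cross_entropy_forward(x, y):
    # x: (N, C) class scores; y: (N,) integer labels.
    x = x - x.max(dim=1, keepdim=True).values    # shift for numerical stability
    x_y = x[torch.arange(x.shape[0]), y]         # score of each correct class
    loss = -x_y + torch.logsumexp(x, dim=1)      # -x_y + log(sum_k e^{x_k})
    return loss.mean()                           # average over the batch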

TODO: Complete the implementation of the CrossEntropyLoss class in the hw2/layers.py module.

In [10]:
# Test CrossEntropy
cross_entropy = layers.CrossEntropyLoss()
scores = torch.randn(N, num_classes)
labels = torch.randint(low=0, high=num_classes, size=(N,), dtype=torch.long)

# Test forward pass
loss = cross_entropy(scores, labels)
expected_loss = torch.nn.functional.cross_entropy(scores, labels)
test.assertLess(torch.abs(expected_loss-loss).item(), 1e-5)
print('loss=', loss.item())

# Test backward pass
test_block_grad(cross_entropy, scores, y=labels)
loss= 2.7283618450164795
Comparing gradients... 
input    diff=0.000

Building Models¶

Now that we have some working Layers, we can build an MLP model of arbitrary depth and compute end-to-end gradients.

First, let's copy an idea from PyTorch and implement our own version of the nn.Sequential Module. This is a Layer which contains other Layers and calls them in sequence. We'll use this to build our MLP model.
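The core idea fits in a few lines: forward applies the contained layers in order, and backward pushes the output gradient through them in reverse. A minimal sketch (assuming each layer's backward() returns the gradient w.r.t. its input, as in our Layer API):

class SequentialSketch:
    def __init__(self, *layers):
        self.layers = layers

    def forward(self, x):
        for layer in self.layers:            # apply layers in order
            x = layer(x)
        return x

    def backward(self, dout):
        for layer in reversed(self.layers):  # chain rule, in reverse order
            dout = layer.backward(dout)
        return dout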

TODO: Complete the implementation of the Sequential class in the hw2/layers.py module.

In [11]:
# Test Sequential
# Let's create a long sequence of layers and see
# whether we can compute end-to-end gradients of the whole thing.

seq = layers.Sequential(
    layers.Linear(in_features, 100),
    layers.Linear(100, 200),
    layers.Linear(200, 100),
    layers.ReLU(),
    layers.Linear(100, 500),
    layers.LeakyReLU(alpha=0.01),
    layers.Linear(500, 200),
    layers.ReLU(),
    layers.Linear(200, 500),
    layers.LeakyReLU(alpha=0.1),
    layers.Linear(500, 1),
    layers.Sigmoid(),
)
x_test = torch.randn(N, in_features)

# Test forward pass
z = seq(x_test)
test.assertSequenceEqual(z.shape, [N, 1])

# Test backward pass
test_block_grad(seq, x_test)
Comparing gradients... 
input    diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000
param#09 diff=0.000
param#10 diff=0.000
param#11 diff=0.000
param#12 diff=0.000
param#13 diff=0.000
param#14 diff=0.000

Now, equipped with a Sequential, all we have to do is create an MLP architecture. We'll define our MLP with the following hyperparameters:

  • Number of input features, $D$.
  • Number of output classes, $C$.
  • Sizes of hidden layers, $h_1,\dots,h_L$.

So the architecture will be:

FC($D$, $h_1$) $\rightarrow$ ReLU $\rightarrow$ FC($h_1$, $h_2$) $\rightarrow$ ReLU $\rightarrow$ $\cdots$ $\rightarrow$ FC($h_{L-1}$, $h_L$) $\rightarrow$ ReLU $\rightarrow$ FC($h_{L}$, $C$)

We'll also create a sequence of the above MLP and a cross-entropy loss, since it's the gradient of the loss that we need in order to train a model.
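A hedged sketch of assembling that architecture from the blocks above (using the layers module imported earlier; details such as wstd and the activation choice are left to your implementation):

def make_mlp_sequence(D, C, hidden_sizes):
    dims = [D] + list(hidden_sizes)
    blocks = []
    for d_in, d_out in zip(dims, dims[1:]):
        blocks += [layers.Linear(d_in, d_out), layers.ReLU()]
    blocks.append(layers.Linear(dims[-1], C))   # final FC to class scores, no activation
    return layers.Sequential(*blocks)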

TODO: Complete the implementation of the MLP class in the hw2/layers.py module. Ignore the dropout parameter for now.

In [12]:
# Create an MLP model
mlp = layers.MLP(in_features, num_classes, hidden_features=[100, 50, 100])
print(mlp)
MLP, Sequential
	[0] Linear(self.in_features=200, self.out_features=100)
	[1] ReLU
	[2] Linear(self.in_features=100, self.out_features=50)
	[3] ReLU
	[4] Linear(self.in_features=50, self.out_features=100)
	[5] ReLU
	[6] Linear(self.in_features=100, self.out_features=10)

In [13]:
# Test MLP architecture
N = 100
in_features = 10
num_classes = 10
for activation in ('relu', 'sigmoid'):
    mlp = layers.MLP(in_features, num_classes, hidden_features=[100, 50, 100], activation=activation)
    test.assertEqual(len(mlp.sequence), 7)
    
    num_linear = 0
    for b1, b2 in zip(mlp.sequence, mlp.sequence[1:]):
        if (str(b2).lower() == activation):
            test.assertTrue(str(b1).startswith('Linear'))
            num_linear += 1
            
    test.assertTrue(str(mlp.sequence[-1]).startswith('Linear'))
    test.assertEqual(num_linear, 3)

    # Test MLP gradients
    # Test forward pass
    x_test = torch.randn(N, in_features)
    labels = torch.randint(low=0, high=num_classes, size=(N,), dtype=torch.long)
    z = mlp(x_test)
    test.assertSequenceEqual(z.shape, [N, num_classes])

    # Create a sequence of MLPs and CE loss
    seq_mlp = layers.Sequential(mlp, layers.CrossEntropyLoss())
    loss = seq_mlp(x_test, y=labels)
    test.assertEqual(loss.dim(), 0)
    print(f'MLP loss={loss}, activation={activation}')

    # Test backward pass
    test_block_grad(seq_mlp, x_test, y=labels)
MLP loss=2.309244155883789, activation=relu
Comparing gradients... 
input    diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000
MLP loss=2.3934404850006104, activation=sigmoid
Comparing gradients... 
input    diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000

If the above tests passed then congratulations - you've now implemented an arbitrarily deep model and loss function with end-to-end automatic differentiation!

Questions¶

TODO Answer the following questions. Write your answers in the appropriate variables in the module hw2/answers.py.

In [14]:
from cs236781.answers import display_answer
import hw2.answers

Question 1¶

Suppose we have a linear (i.e. fully-connected) layer with a weight tensor $\mat{W}$, defined with in_features=1024 and out_features=512. We apply this layer to an input tensor $\mat{X}$ containing a batch of N=64 samples. The output of the layer is denoted as $\mat{Y}$.

  1. Consider the Jacobian tensor $\pderiv{\mat{Y}}{\mat{X}}$ of the output of the layer w.r.t. the input $\mat{X}$.

    1. What is the shape of this tensor?
    2. Is this Jacobian sparse (most elements zero by definition)? If so, why and which elements?
    3. Given the gradient of the output w.r.t. some downstream scalar loss $L$, $\delta\mat{Y}=\pderiv{L}{\mat{Y}}$, do we need to materialize the above Jacobian in order to calculate the downstream gradient w.r.t. the input ($\delta\mat{X}$)? If yes, explain why; if no, show how to calculate it without materializing the Jacobian.
  2. Consider the Jacobian tensor $\pderiv{\mat{Y}}{\mat{W}}$ of the output of the layer w.r.t. the layer weights $\mat{W}$. Answer questions A-C about it as well.

In [15]:
display_answer(hw2.answers.part1_q1)

Your answer: 1. The matrix $\mathbf{X}$ is of shape 64x1024 and $\mathbf{W}$ is of shape 512x1024, so the output of our linear layer, $\displaystyle \mathbf{Y} =\mathbf{XW}^{T}$, is of shape 64x512 and can be written element-wise as $\displaystyle \mathbf{Y}_{jk} =\sum _{i} \mathbf{X}_{ji}\mathbf{W}_{ik}^{T}$,

so $\displaystyle \left(\frac{\partial \mathbf{Y}}{\partial \mathbf{X}}\right)_{j,k,m,n} =\frac{\partial Y_{jk}}{\partial X_{mn}} =\frac{\partial }{\partial X_{mn}}\sum _{i} X_{ji} W_{ik}^{T} =\sum _{i} \delta _{jm} \delta _{in} W_{ik}^{T} =\delta _{jm} W_{kn}$

A. The tensor is a 4D tensor of shape 64x512x64x1024.

B. As we can see from our calculation, we get a Kronecker delta which zeroes out every element where the row index of the element of $\mathbf{Y}$ differs from the row index of the element of $\mathbf{X}$; that is why the Jacobian is sparse.

C. No we don't: using the chain rule we can write \begin{equation*} \left(\frac{\partial L}{\partial \mathbf{X}}\right)_{mn} =\sum _{j,k}\frac{\partial L}{\partial Y_{jk}}\frac{\partial Y_{jk}}{\partial X_{mn}} =\sum _{j,k}\frac{\partial L}{\partial Y_{jk}} \delta _{jm} W_{kn} =\sum _{k}\frac{\partial L}{\partial Y_{mk}} W_{kn} =( \delta \mathbf{Y} \cdot \mathbf{W})_{mn} \end{equation*} So we only need to multiply $\delta \mathbf{Y}$ by $\mathbf{W}$.

  2. Let's repeat the calculation for $\displaystyle \frac{\partial \mathbf{Y}}{\partial \mathbf{W}}$:
\begin{equation*} \left(\frac{\partial \mathbf{Y}}{\partial \mathbf{W}}\right)_{j,k,m,n} =\frac{\partial Y_{jk}}{\partial W_{mn}} =\frac{\partial }{\partial W_{mn}}\sum _{i} X_{ji} W_{ik}^{T} =\sum _{i} X_{ji} \delta _{km} \delta _{in} =\delta _{km} X_{jn} \end{equation*}

A. The tensor is a 4D tensor of shape 64x512x512x1024.

B. As we can see from our calculation, the Kronecker delta zeroes out every element where the column index of the element of $\mathbf{Y}$ differs from the row index of the element of $\mathbf{W}$; that is why this Jacobian is also sparse.

C. No we don't: using the chain rule we can write \begin{equation*} \left(\frac{\partial L}{\partial \mathbf{W}}\right)_{mn} =\sum _{j,k}\frac{\partial L}{\partial Y_{jk}}\frac{\partial Y_{jk}}{\partial W_{mn}} =\sum _{j,k}\frac{\partial L}{\partial Y_{jk}} \delta _{km} X_{jn} =\sum _{j}\frac{\partial L}{\partial Y_{jm}} X_{jn} =\left( \delta \mathbf{Y}^{T} \cdot \mathbf{X}\right)_{mn} \end{equation*} So we only need to multiply: $\frac{\partial L}{\partial \mathbf{W}} =\delta \mathbf{Y}^{T}\mathbf{X}$.

Question 2¶

Is back-propagation required in order to train neural networks with descent-based optimization? Why or why not?

In [16]:
display_answer(hw2.answers.part1_q2)

Your answer: 2. No. We can use other techniques, as we learned in the tutorial, to train neural networks with descent-based optimization. For example, forward/reverse-mode AD, calculating the gradients by hand specifically for our model (and applying the resulting gradient expressions after each forward pass), or even the likelihood ratio (LR) method (https://arxiv.org/abs/2305.08960). The reason backpropagation is such a popular technique is that it lets us use the power of dynamic programming to calculate gradients efficiently on computational graphs.

$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$

Part 2: Optimization and Training¶

In this part we will learn how to implement optimization algorithms for deep networks. Additionally, we'll learn how to write training loops and implement a modular model trainer. We'll use our optimizers and training code to test a few configurations for classifying images with an MLP model.

In [1]:
import os
import numpy as np
import matplotlib.pyplot as plt
import unittest
import torch
import torchvision
import torchvision.transforms as tvtf

%matplotlib inline
%load_ext autoreload
%autoreload 2
In [2]:
seed = 42
plt.rcParams.update({'font.size': 12})
test = unittest.TestCase()

Implementing Optimization Algorithms¶

In the context of deep learning, an optimization algorithm is some method of iteratively updating model parameters so that the loss converges toward some local minimum (which we hope will be good enough).

Gradient descent-based methods are by far the most popular algorithms for optimization of neural network parameters. However the high-dimensional loss-surfaces we encounter in deep learning applications are highly non-convex. They may be riddled with local minima, saddle points, large plateaus and a host of very challenging "terrain" for gradient-based optimization. This gave rise to many different methods of performing the parameter updates based on the loss gradients, aiming to tackle these optimization challenges.

The most basic gradient-based update rule can be written as,

$$ \vec{\theta} \leftarrow \vec{\theta} - \eta \nabla_{\vec{\theta}} L(\vec{\theta}; \mathcal{D}) $$

where $\mathcal{D} = \left\{ (\vec{x}^i, \vec{y}^i) \right\}_{i=1}^{M}$ is our training dataset or part of it. Specifically, if we have in total $N$ training samples, then

  • If $M=N$ this is known as regular gradient descent. If the dataset does not fit in memory the gradient of this loss becomes infeasible to compute.
  • If $M=1$, the loss is computed w.r.t. a single different sample each time. This is known as stochastic gradient descent.
  • If $1<M<N$ this is known as stochastic mini-batch gradient descent. This is the most commonly-used option.

The intuition behind gradient descent is simple: since the gradient of a multivariate function points in the direction of steepest ascent ("uphill"), we move in the opposite direction. A small step size $\eta$, known as the learning rate, is required since the gradient can only serve as a first-order linear approximation of the function's behaviour at $\vec{\theta}$ (recall e.g. the Taylor expansion). In truth, our loss surface generally has nontrivial curvature caused by a high-order nonlinear dependency on $\vec{\theta}$, so taking a large step in the direction of the gradient may well increase the function value.

The idea behind the stochastic versions is that by constantly changing the samples we compute the loss with, we get a dynamic error surface, i.e. it's different for each set of training samples. This is thought to generally improve the optimization, since it may help the optimizer get out of flat regions or sharp local minima: these features may disappear in the loss surface of subsequent batches. The image below illustrates this. The different lines are different 1-dimensional losses for different training-set samples.

Deep learning frameworks generally provide implementations of various gradient-based optimization algorithms. Here we'll implement our own optimization module from scratch, this time keeping a similar API to the PyTorch optim package.

We define a base Optimizer class. An optimizer holds a set of parameter tensors (these are the trainable parameters of some model) and maintains internal state. It may be used as follows:

  • After the forward pass has been performed the optimizer's zero_grad() function is invoked to clear the parameter gradients computed by previous iterations.
  • After the backward pass has been performed, and gradients have been calculated for these parameters, the optimizer's step() function is invoked in order to update the value of each parameter based on its gradient.

The exact method of update is implementation-specific for each optimizer and may depend on its internal state. In addition, adding the regularization penalty to the gradient is handled by the optimizer since it only depends on the parameter values (and not the data).
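Putting the two calls together, a typical training iteration with this API might look like the following sketch (the names and the exact way the backward pass is seeded are assumptions, not the required implementation):

def train_step(model, loss_fn, optimizer, x, y):
    scores = model(x)                    # forward pass
    loss = loss_fn(scores, y)
    optimizer.zero_grad()                # clear grads from the previous iteration
    model.backward(loss_fn.backward())   # backward pass populates parameter grads
    optimizer.step()                     # update each parameter from its grad
    return loss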

Here's the API of our Optimizer:

In [3]:
import hw2.optimizers as optimizers
help(optimizers.Optimizer)
Help on class Optimizer in module hw2.optimizers:

class Optimizer(abc.ABC)
 |  Optimizer(params)
 |  
 |  Base class for optimizers.
 |  
 |  Method resolution order:
 |      Optimizer
 |      abc.ABC
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __init__(self, params)
 |      :param params: A sequence of model parameters to optimize. Can be a
 |      list of (param,grad) tuples as returned by the Layers, or a list of
 |      pytorch tensors in which case the grad will be taken from them.
 |  
 |  step(self)
 |      Updates all the registered parameter values based on their gradients.
 |  
 |  zero_grad(self)
 |      Sets the gradient of the optimized parameters to zero (in place).
 |  
 |  ----------------------------------------------------------------------
 |  Readonly properties defined here:
 |  
 |  params
 |      :return: A sequence of parameter tuples, each tuple containing
 |      (param_data, param_grad). The data should be updated in-place
 |      according to the grad.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)
 |  
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |  
 |  __abstractmethods__ = frozenset({'step'})

Vanilla SGD with Regularization¶

Let's start by implementing the simplest gradient-based optimizer. The update rule will be exactly as stated above, but we'll also add an L2-regularization term to the gradient. Remember that in the loss function, the L2 regularization term is expressed by

$$R(\vec{\theta}) = \frac{1}{2}\lambda||\vec{\theta}||^2_2.$$
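Since $\nabla_{\vec{\theta}} R(\vec{\theta}) = \lambda\vec{\theta}$, the penalty simply adds $\lambda\vec{\theta}$ to each parameter's gradient before the usual update. A hedged sketch of the step:

def vanilla_sgd_step(params, learn_rate, reg):
    # params: (param, grad) tuples, as described by the Optimizer API above.
    for p, dp in params:
        dp = dp + reg * p        # add the gradient of the L2 penalty term
        p -= learn_rate * dp     # in-place descent step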

TODO: Complete the implementation of the VanillaSGD class in the hw2/optimizers.py module.

In [4]:
# Test VanillaSGD
torch.manual_seed(42)
p = torch.randn(500, 10)
dp = torch.randn(*p.shape)*2
params = [(p, dp)]

vsgd = optimizers.VanillaSGD(params, learn_rate=0.5, reg=0.1)
vsgd.step()

expected_p = torch.load('tests/assets/expected_vsgd.pt')
diff = torch.norm(p-expected_p).item()
print(f'diff={diff}')
test.assertLess(diff, 1e-3)
diff=1.0932822078757454e-06

Training¶

Now that we can build a model and a loss function, compute their gradients, and have an optimizer, we can finally do some training!

In the spirit of more modular software design, we'll implement a class that will aid us in automating the repetitive training loop code that we usually write over and over again. This will be useful for both training our Layer-based models and also later for training PyTorch nn.Modules.

Here's our Trainer API:

In [5]:
import hw2.training as training
help(training.Trainer)
Help on class Trainer in module hw2.training:

class Trainer(abc.ABC)
 |  Trainer(model: torch.nn.modules.module.Module, device: Union[torch.device, NoneType] = None)
 |  
 |  A class abstracting the various tasks of training models.
 |  
 |  Provides methods at multiple levels of granularity:
 |  - Multiple epochs (fit)
 |  - Single epoch (train_epoch/test_epoch)
 |  - Single batch (train_batch/test_batch)
 |  
 |  Method resolution order:
 |      Trainer
 |      abc.ABC
 |      builtins.object
 |  
 |  Methods defined here:
 |  
 |  __init__(self, model: torch.nn.modules.module.Module, device: Union[torch.device, NoneType] = None)
 |      Initialize the trainer.
 |      :param model: Instance of the model to train.
 |      :param device: torch.device to run training on (CPU or GPU).
 |  
 |  fit(self, dl_train: torch.utils.data.dataloader.DataLoader, dl_test: torch.utils.data.dataloader.DataLoader, num_epochs: int, checkpoints: str = None, early_stopping: int = None, print_every: int = 1, **kw) -> cs236781.train_results.FitResult
 |      Trains the model for multiple epochs with a given training set,
 |      and calculates validation loss over a given validation set.
 |      :param dl_train: Dataloader for the training set.
 |      :param dl_test: Dataloader for the test set.
 |      :param num_epochs: Number of epochs to train for.
 |      :param checkpoints: Whether to save model to file every time the
 |          test set accuracy improves. Should be a string containing a
 |          filename without extension.
 |      :param early_stopping: Whether to stop training early if there is no
 |          test loss improvement for this number of epochs.
 |      :param print_every: Print progress every this number of epochs.
 |      :return: A FitResult object containing train and test losses per epoch.
 |  
 |  save_checkpoint(self, checkpoint_filename: str)
 |      Saves the model in it's current state to a file with the given name (treated
 |      as a relative path).
 |      :param checkpoint_filename: File name or relative path to save to.
 |  
 |  test_batch(self, batch) -> cs236781.train_results.BatchResult
 |      Runs a single batch forward through the model and calculates loss.
 |      :param batch: A single batch of data  from a data loader (might
 |          be a tuple of data and labels or anything else depending on
 |          the underlying dataset.
 |      :return: A BatchResult containing the value of the loss function and
 |          the number of correctly classified samples in the batch.
 |  
 |  test_epoch(self, dl_test: torch.utils.data.dataloader.DataLoader, **kw) -> cs236781.train_results.EpochResult
 |      Evaluate model once over a test set (single epoch).
 |      :param dl_test: DataLoader for the test set.
 |      :param kw: Keyword args supported by _foreach_batch.
 |      :return: An EpochResult for the epoch.
 |  
 |  train_batch(self, batch) -> cs236781.train_results.BatchResult
 |      Runs a single batch forward through the model, calculates loss,
 |      preforms back-propagation and updates weights.
 |      :param batch: A single batch of data  from a data loader (might
 |          be a tuple of data and labels or anything else depending on
 |          the underlying dataset.
 |      :return: A BatchResult containing the value of the loss function and
 |          the number of correctly classified samples in the batch.
 |  
 |  train_epoch(self, dl_train: torch.utils.data.dataloader.DataLoader, **kw) -> cs236781.train_results.EpochResult
 |      Train once over a training set (single epoch).
 |      :param dl_train: DataLoader for the training set.
 |      :param kw: Keyword args supported by _foreach_batch.
 |      :return: An EpochResult for the epoch.
 |  
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |  
 |  __dict__
 |      dictionary for instance variables (if defined)
 |  
 |  __weakref__
 |      list of weak references to the object (if defined)
 |  
 |  ----------------------------------------------------------------------
 |  Data and other attributes defined here:
 |  
 |  __abstractmethods__ = frozenset({'test_batch', 'train_batch'})

The Trainer class splits the task of training (and evaluating) models into three conceptual levels,

  • Multiple epochs - the fit method, which returns a FitResult containing losses and accuracies for all epochs.
  • Single epoch - the train_epoch and test_epoch methods, which return an EpochResult containing losses per batch and the single accuracy result of the epoch.
  • Single batch - the train_batch and test_batch methods, which return a BatchResult containing a single loss and the number of correctly classified samples in the batch.

It implements the first two levels. Inheriting classes are expected to implement the single-batch level methods since these are model and/or task specific.

The first thing we should do in order to verify our model, gradient calculations and optimizer implementation is to try to overfit a large model (many parameters) to a small dataset (few images). This will show us that things are working properly.

Let's begin by loading the CIFAR-10 dataset.

In [6]:
data_dir = os.path.expanduser('~/.pytorch-datasets')
ds_train = torchvision.datasets.CIFAR10(root=data_dir, download=True, train=True, transform=tvtf.ToTensor())
ds_test = torchvision.datasets.CIFAR10(root=data_dir, download=True, train=False, transform=tvtf.ToTensor())

print(f'Train: {len(ds_train)} samples')
print(f'Test: {len(ds_test)} samples')
Files already downloaded and verified
Files already downloaded and verified
Train: 50000 samples
Test: 10000 samples

Now, let's implement just a small part of our training logic since that's what we need right now.

TODO:

  1. Complete the implementation of the train_batch() method in the LayerTrainer class within the hw2/training.py module.
  2. Update the hyperparameter values in the part2_overfit_hp() function in the hw2/answers.py module. Tweak the hyperparameter values until your model overfits a small number of samples in the code block below. You should get 100% accuracy within a few epochs.

The following code block will use your custom Layer-based MLP implementation, custom Vanilla SGD and custom trainer to overfit the data. The classification accuracy should be 100% within a few epochs.

In [7]:
import hw2.layers as layers
import hw2.answers as answers
from torch.utils.data import DataLoader

# Overfit to a very small dataset of 20 samples
batch_size = 10
max_batches = 2
dl_train = torch.utils.data.DataLoader(ds_train, batch_size, shuffle=False)

# Get hyperparameters
hp = answers.part2_overfit_hp()

torch.manual_seed(seed)

# Build a model and loss using our custom MLP and CE implementations
model = layers.MLP(3*32*32, num_classes=10, hidden_features=[128]*3, wstd=hp['wstd'])
loss_fn = layers.CrossEntropyLoss()

# Use our custom optimizer
optimizer = optimizers.VanillaSGD(model.params(), learn_rate=hp['lr'], reg=hp['reg'])

# Run training over small dataset multiple times
trainer = training.LayerTrainer(model, loss_fn, optimizer)
best_acc = 0
for i in range(20):
    res = trainer.train_epoch(dl_train, max_batches=max_batches)
    best_acc = res.accuracy if res.accuracy > best_acc else best_acc
    
test.assertGreaterEqual(best_acc, 98)

Now that we know training works, let's try to fit a model to a bit more data for a few epochs, to see how well we're doing. First, we need a function to plot the FitResults object.

In [8]:
from cs236781.plot import plot_fit
plot_fit?

TODO:

  1. Complete the implementation of the test_batch() method in the LayerTrainer class within the hw2/training.py module.
  2. Implement the fit() method of the Trainer class within the hw2/training.py module.
  3. Tweak the hyperparameters for this section in the part2_optim_hp() function in the hw2/answers.py module.
  4. Run the following code blocks to train. Try to get above 35-40% test-set accuracy.
In [9]:
# Define a larger part of the CIFAR-10 dataset (still not the whole thing)
batch_size = 50
max_batches = 100
in_features = 3*32*32
num_classes = 10
dl_train = torch.utils.data.DataLoader(ds_train, batch_size, shuffle=False)
dl_test = torch.utils.data.DataLoader(ds_test, batch_size//2, shuffle=False)
In [10]:
# Define a function to train a model with our Trainer and various optimizers
def train_with_optimizer(opt_name, opt_class, fig):
    torch.manual_seed(seed)
    
    # Get hyperparameters
    hp = answers.part2_optim_hp()
    hidden_features = [128] * 5
    num_epochs = 10
    
    # Create model, loss and optimizer instances
    model = layers.MLP(in_features, num_classes, hidden_features, wstd=hp['wstd'])
    loss_fn = layers.CrossEntropyLoss()
    optimizer = opt_class(model.params(), learn_rate=hp[f'lr_{opt_name}'], reg=hp['reg'])

    # Train with the Trainer
    trainer = training.LayerTrainer(model, loss_fn, optimizer)
    fit_res = trainer.fit(dl_train, dl_test, num_epochs, max_batches=max_batches)
    
    fig, axes = plot_fit(fit_res, fig=fig, legend=opt_name)
    return fig
In [11]:
fig_optim = None
fig_optim = train_with_optimizer('vanilla', optimizers.VanillaSGD, fig_optim)
--- EPOCH 1/10 ---
--- EPOCH 2/10 ---
--- EPOCH 3/10 ---
--- EPOCH 4/10 ---
--- EPOCH 5/10 ---
--- EPOCH 6/10 ---
--- EPOCH 7/10 ---
--- EPOCH 8/10 ---
--- EPOCH 9/10 ---
--- EPOCH 10/10 ---

Momentum¶

The simple vanilla SGD update is rarely used in practice since it's very slow to converge relative to other optimization algorithms.

One reason is that naïvely updating in the direction of the current gradient causes the updates to fluctuate wildly in areas where the loss surface is much steeper along some dimensions than others. Another reason is that using the same learning rate for all parameters is not a great idea, since not all parameters are created equal. For example, parameters associated with rare features should be updated with a larger step than ones associated with commonly-occurring features, because they'll receive fewer updates through the gradients.

Therefore more advanced optimizers take into account the previous gradients of a parameter and/or try to use a per-parameter specific learning rate instead of a common one.

Let's now implement a simple and common optimizer: SGD with Momentum. This optimizer takes previous gradients of a parameter into account when updating its value, instead of just the current one. In practice it usually provides faster convergence than vanilla SGD.

The SGD with Momentum update rule can be stated as follows: $$\begin{align} \vec{v}_{t+1} &= \mu \vec{v}_t - \eta \delta \vec{\theta}_t \\ \vec{\theta}_{t+1} &= \vec{\theta}_t + \vec{v}_{t+1} \end{align}$$

Where $\eta$ is the learning rate, $\vec{\theta}$ is a model parameter, $\delta \vec{\theta}_t=\pderiv{L}{\vec{\theta}}(\vec{\theta}_t)$ is the gradient of the loss w.r.t. the parameter and $0\leq\mu<1$ is a hyperparameter known as momentum.

Expanding the update rule recursively shows us how the parameter update in fact depends on all previous gradient values for that parameter, where the old gradients are exponentially decayed by a factor of $\mu$ at each timestep.

Since we're incorporating previous gradients (update directions), a noisy value of the current gradient will have less effect, so that the general direction of previous updates is somewhat maintained. The following figure illustrates this.
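A hedged sketch of the update (keeping one velocity tensor per parameter, initialized to zeros; the real class structure is up to your implementation):

def momentum_step(params, velocities, learn_rate, momentum):
    for (p, dp), v in zip(params, velocities):
        v.mul_(momentum).sub_(learn_rate * dp)   # v <- mu*v - eta*grad
        p += v                                   # theta <- theta + v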

TODO:

  1. Complete the implementation of the MomentumSGD class in the hw2/optimizers.py module.
  2. Tweak the learning rate for momentum in part2_optim_hp() the function in the hw2/answers.py module.
  3. Run the following code block to compare to the vanilla SGD.
In [12]:
fig_optim = train_with_optimizer('momentum', optimizers.MomentumSGD, fig_optim)
fig_optim
--- EPOCH 1/10 ---
--- EPOCH 2/10 ---
--- EPOCH 3/10 ---
--- EPOCH 4/10 ---
--- EPOCH 5/10 ---
--- EPOCH 6/10 ---
--- EPOCH 7/10 ---
--- EPOCH 8/10 ---
--- EPOCH 9/10 ---
--- EPOCH 10/10 ---
Out[12]:

Bonus: RMSProp¶

This is another optimizer that accounts for previous gradients, but this time it uses them to adapt the learning rate per parameter.

RMSProp maintains a decaying moving average of previous squared gradients, $$ \vec{r}_{t+1} = \gamma\vec{r}_{t} + (1-\gamma)\delta\vec{\theta}_t^2 $$ where $0<\gamma<1$ is a decay constant usually set close to $1$, and $\delta\vec{\theta}_t^2$ denotes element-wise squaring.

The update rule for each parameter is then, $$ \vec{\theta}_{t+1} = \vec{\theta}_t - \left( \frac{\eta}{\sqrt{\vec{r}_{t+1}+\varepsilon}} \right) \delta\vec{\theta}_t $$

where $\varepsilon$ is a small constant to prevent numerical instability. The idea here is to decrease the learning rate for parameters with high gradient values and vice-versa. The decaying moving average prevents accumulating all the past gradients which would cause the effective learning rate to become zero.
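A hedged sketch of the update (one running average $\vec{r}$ per parameter, initialized to zeros):

import torch

def rmsprop_step(params, sq_avgs, learn_rate, decay=0.99, eps=1e-8):
    for (p, dp), r in zip(params, sq_avgs):
        r.mul_(decay).add_((1 - decay) * dp ** 2)    # r <- gamma*r + (1-gamma)*grad^2
        p -= learn_rate / torch.sqrt(r + eps) * dp   # per-parameter effective step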

Bonus:

  1. Complete the implementation of the RMSProp class in the hw2/optimizers.py module.
  2. Tweak the learning rate for RMSProp in part2_optim_hp() the function in the hw2/answers.py module.
  3. Run the following code block to compare to the other optimizers.
In [13]:
fig_optim = train_with_optimizer('rmsprop', optimizers.RMSProp, fig_optim)
fig_optim
--- EPOCH 1/10 ---
--- EPOCH 2/10 ---
--- EPOCH 3/10 ---
--- EPOCH 4/10 ---
--- EPOCH 5/10 ---
--- EPOCH 6/10 ---
--- EPOCH 7/10 ---
--- EPOCH 8/10 ---
--- EPOCH 9/10 ---
--- EPOCH 10/10 ---
Out[13]:

Note that you should get better train/test accuracy with Momentum and RMSProp than Vanilla.

Dropout Regularization¶

Dropout is a useful technique to improve generalization of deep models.

The idea is simple: during the forward pass, drop (i.e. set to zero) the activation of each neuron with probability $p$. For example, if $p=0.4$, we drop the activations of 40% of the neurons (on average).

There are a few important things to note about dropout:

  1. It is only performed during training. When testing our model the dropout layers should be a no-op.
  2. In the backward pass, gradients are only propagated back into neurons that weren't dropped during the forward pass.
  3. During testing, the activations must be scaled, since the expected value of each neuron during the training phase is now $1-p$ times its original expectation. Thus, we need to scale the test-time activations by $1-p$ to match. Equivalently, we can scale the train-time activations by $1/(1-p)$, as in the sketch below.
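A hedged sketch of this "inverted" variant (scaling at train time by $1/(1-p)$, so that test time is a pure no-op):

import torch

def dropout_forward(x, p, training):
    if not training or p == 0:
        return x, None                                   # no-op at test time
    mask = (torch.rand_like(x) >= p).float() / (1 - p)   # keep w.p. 1-p, then scale
    return x * mask, mask

def dropout_backward(dout, mask):
    # Gradients flow only through the neurons that were kept.
    return dout if mask is None else dout * mask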

TODO:

  1. Complete the implementation of the Dropout class in the hw2/layers.py module.
  2. Finish the implementation of the MLP's __init__() method in the hw2/layers.py module. If dropout>0 you should add a Dropout layer after each ReLU.
In [14]:
from hw2.grad_compare import compare_layer_to_torch

# Check architecture of MLP with dropout layers
mlp_dropout = layers.MLP(in_features, num_classes, [50]*3, dropout=0.6)
print(mlp_dropout)
test.assertEqual(len(mlp_dropout.sequence), 10)
for b1, b2 in zip(mlp_dropout.sequence, mlp_dropout.sequence[1:]):
    if str(b1).lower() == 'relu':
        test.assertTrue(str(b2).startswith('Dropout'))
test.assertTrue(str(mlp_dropout.sequence[-1]).startswith('Linear'))
MLP, Sequential
	[0] Linear(self.in_features=3072, self.out_features=50)
	[1] ReLU
	[2] Dropout(p=0.6)
	[3] Linear(self.in_features=50, self.out_features=50)
	[4] ReLU
	[5] Dropout(p=0.6)
	[6] Linear(self.in_features=50, self.out_features=50)
	[7] ReLU
	[8] Dropout(p=0.6)
	[9] Linear(self.in_features=50, self.out_features=10)

In [15]:
# Test end-to-end gradient in train and test modes.
print('Dropout, train mode')
mlp_dropout.train(True)
for diff in compare_layer_to_torch(mlp_dropout, torch.randn(500, in_features)):
    test.assertLess(diff, 1e-3)
    
print('Dropout, test mode')
mlp_dropout.train(False)
for diff in compare_layer_to_torch(mlp_dropout, torch.randn(500, in_features)):
    test.assertLess(diff, 1e-3)
Dropout, train mode
Comparing gradients... 
input    diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000
Dropout, test mode
Comparing gradients... 
input    diff=0.000
param#01 diff=0.000
param#02 diff=0.000
param#03 diff=0.000
param#04 diff=0.000
param#05 diff=0.000
param#06 diff=0.000
param#07 diff=0.000
param#08 diff=0.000

To see whether dropout really improves generalization, let's take a small training set (small enough to overfit) and a large test set and check whether we get less overfitting and perhaps improved test-set accuracy when using dropout.

In [16]:
# Define a small set from CIFAR-10, but take a larger test set since we want to test generalization
batch_size = 10
max_batches = 40
in_features = 3*32*32
num_classes = 10
dl_train = torch.utils.data.DataLoader(ds_train, batch_size, shuffle=False)
dl_test = torch.utils.data.DataLoader(ds_test, batch_size*2, shuffle=False)

TODO: Tweak the hyperparameters for this section in the part2_dropout_hp() function in the hw2/answers.py module. Try to set them so that the first model (with dropout=0) overfits. You can disable the other dropout options until you tune the hyperparameters. We can then see the effect of dropout on generalization.

In [17]:
# Get hyperparameters
hp = answers.part2_dropout_hp()
hidden_features = [400] * 1
num_epochs = 30
In [18]:
torch.manual_seed(seed)
fig=None
#for dropout in [0]:  # Use this for tuning the hyperparms until you overfit
for dropout in [0, 0.4, 0.8]:
    model = layers.MLP(in_features, num_classes, hidden_features, wstd=hp['wstd'], dropout=dropout)
    loss_fn = layers.CrossEntropyLoss()
    optimizer = optimizers.MomentumSGD(model.params(), learn_rate=hp['lr'], reg=0)

    print('*** Training with dropout=', dropout)
    trainer = training.LayerTrainer(model, loss_fn, optimizer)
    fit_res_dropout = trainer.fit(dl_train, dl_test, num_epochs, max_batches=max_batches, print_every=6)
    fig, axes = plot_fit(fit_res_dropout, fig=fig, legend=f'dropout={dropout}', log_loss=True)
*** Training with dropout= 0
--- EPOCH 1/30 ---
--- EPOCH 7/30 ---
--- EPOCH 13/30 ---
--- EPOCH 19/30 ---
--- EPOCH 25/30 ---
--- EPOCH 30/30 ---
*** Training with dropout= 0.4
--- EPOCH 1/30 ---
--- EPOCH 7/30 ---
--- EPOCH 13/30 ---
--- EPOCH 19/30 ---
--- EPOCH 25/30 ---
--- EPOCH 30/30 ---
*** Training with dropout= 0.8
--- EPOCH 1/30 ---
--- EPOCH 7/30 ---
--- EPOCH 13/30 ---
--- EPOCH 19/30 ---
--- EPOCH 25/30 ---
--- EPOCH 30/30 ---

Questions¶

TODO Answer the following questions. Write your answers in the appropriate variables in the module hw2/answers.py.

In [19]:
from cs236781.answers import display_answer
import hw2.answers

Question 1¶

Regarding the graphs you got for the three dropout configurations:

  1. Explain the graphs of no-dropout vs dropout. Do they match what you expected to see?

    • If yes, explain why and provide examples based on the graphs.
    • If no, explain what you think the problem is and what should be modified to fix it.
  2. Compare the low-dropout setting to the high-dropout setting and explain based on your graphs.

In [20]:
display_answer(hw2.answers.part2_q1)

Your answer:

  1. Yes, our graph results match our expectations. Dropout layers prevent our model from overfitting to the training data. We can see in our graphs that the model with the best train accuracy is the one without any dropout layers, but its advantage does not carry over to the test set. So, besides acting as a kind of regularization layer, dropout also prevents the network from depending on a few dominant features, and thus allows the network to generalize even more.
  2. Applying too much dropout prevents the backpropagation algorithm from fine-tuning the weights at the same speed as a non-dropout network, producing underfitting and a loss of expressiveness; in our results, the dropout of 0.8 suffers from bad train and test accuracy. Furthermore, applying too much dropout can make the network more sensitive to changes of hyperparameters. The model that doesn't overfit and at the same time gives the best generalization is the model with dropout of 0.4.

Question 2¶

When training a model with the cross-entropy loss function, is it possible for the test loss to increase for a few epochs while the test accuracy also increases?

If it's possible explain how, if it's not explain why not.

In [21]:
display_answer(hw2.answers.part2_q2)

Your answer: Yes. Accuracy is a discrete measure of how many predictions are correct, whereas cross-entropy measures how confident the model is in its classifications. For the cross-entropy loss over $n$ classes, \begin{equation*} L_{CE} = -\sum _{i=1}^{n} t_{i} \log( p_{i}), \end{equation*} the predicted probability $\displaystyle p_{i}$ of the true class can decrease on samples that remain correctly classified (increasing the loss), while on other samples a small probability shift across the decision threshold turns wrong predictions into correct ones (increasing the accuracy). For example, in binary classification, a correct output changing from [0.9, 0.1] to [0.6, 0.4] keeps the same prediction, so the accuracy is unchanged, but that sample's loss rises from about 0.11 to about 0.51; meanwhile another sample (true class 0) changing from [0.45, 0.55] to [0.55, 0.45] crosses the threshold and becomes correct, lowering its loss only from about 0.80 to 0.60. The total loss therefore increases while the accuracy also increases.
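The arithmetic above can be checked directly; this small snippet (illustrative numbers only, not assignment code) reproduces the per-sample losses $-\log(p_{true})$ for the two-sample scenario:

import math

# Sample A stays correctly classified, but with lower confidence:
print(-math.log(0.9), -math.log(0.6))    # ~0.105 -> ~0.511 (loss up, still correct)
# Sample B crosses the 0.5 threshold and becomes correct:
print(-math.log(0.45), -math.log(0.55))  # ~0.799 -> ~0.598 (loss down slightly)
# Totals: 0.105 + 0.799 = 0.904  ->  0.511 + 0.598 = 1.109 (loss up),
# while accuracy goes from 1/2 to 2/2 (accuracy up).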

Question 3¶

  1. Explain the difference between gradient descent and back-propagation.

  2. Compare in detail between gradient descent (GD) and stochastic gradient descent (SGD).

  3. Why is SGD used more often in the practice of deep learning? Provide a few justifications.

  4. You would like to try GD to train your model instead of SGD, but you're concerned that your dataset won't fit in memory. A friend suggested that you should split the data into disjoint batches, do multiple forward passes until all data is exhausted, and then do one backward pass on the sum of the losses.

    1. Would this approach produce a gradient equivalent to GD? Why or why not? Provide mathematical justification for your answer.
    2. You implemented the suggested approach, and were careful to use batch sizes small enough so that each batch fits in memory. However, after some number of batches you got an out of memory error. What happened?
In [22]:
display_answer(hw2.answers.part2_q3)

Your answer:

  1. Backpropagation computes the gradients of the loss with respect to the model parameters: it performs a forward pass, saves intermediate values, and then propagates backwards towards the input layer, obtaining the gradients via chain-rule multiplications. Gradient descent then uses these gradients to take a step, i.e. a small change in the model parameters (weights and biases), that reduces the given cost function, in the hope of making the model predict well on unseen data.

  2. In GD the gradients are computed over the entire training set at every step, while in SGD we first divide the training data into individual samples or mini-batches; then, in each iteration, we pick one of them and take a gradient step based on it alone. Because SGD does not use the entire dataset per step, it reduces the computational burden, and the parameters are updated far more frequently than in GD, which converges slowly. GD's steps are more stable, but it is rarely used for large datasets.

  3. In deep learning we are usually provided with huge datasets. Running GD on the entire dataset is computationally slow, requires huge amounts of memory to hold all intermediate results, and is prone to numerical errors due to the large number of algebraic operations. SGD is more popular in deep learning because using mini-batches to compute the gradients solves those problems: it trains the network quicker, with fewer numerical errors, and without exhausting the available memory.

  4. A. This approach produces a gradient equivalent to GD only when the total cost is additive in the per-sample losses. Say we use the empirical risk $\displaystyle C=\sum _{i} L( h( x_{i}) ,y_{i}) =\sum _{i} L_{(i)}$ and define $\displaystyle C_{(k)}$ as the loss computed over the $k$-th mini-batch. Then, by linearity of differentiation,
\begin{equation*}
\frac{\partial C}{\partial \mathbf{\theta }}
= \sum _{i=1}^{n}\frac{\partial L_{(i)}}{\partial \mathbf{\theta }}
= \frac{\partial }{\partial \mathbf{\theta }}\sum _{i=1}^{n_{1}} L_{(i)} + \dots + \frac{\partial }{\partial \mathbf{\theta }}\sum _{i=n_{m-1}+1}^{n_{m}} L_{(i)}
= \frac{\partial C_{(1)}}{\partial \mathbf{\theta }} + \dots + \frac{\partial C_{(m)}}{\partial \mathbf{\theta }}
= \frac{\partial }{\partial \mathbf{\theta }}\left( C_{(1)} + \dots + C_{(m)}\right) .
\end{equation*}
So summing the batch losses and performing a single backward pass yields exactly the GD gradient, as long as the cost decomposes additively over the samples.

B. Each forward pass must store its intermediate activations so that the backward pass can use them. Since the backward pass is deferred until all batches have been processed, the activations of every batch accumulate in memory, and after enough batches memory is exhausted (see the sketch below).
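The memory issue in 4.B can be seen in code. The sketch below assumes some model, a loss_fn with sum reduction, and an iterable batches (none of which are part of the assignment code), and contrasts the friend's deferred-backward approach with standard per-batch gradient accumulation; both produce the same summed gradient by the linearity argument from 4.A.

# Friend's approach: every batch's activation graph stays alive until the
# single backward pass, so memory grows with the number of batches.
total_loss = 0.0
for x, y in batches:
    total_loss = total_loss + loss_fn(model(x), y)
total_loss.backward()

# Equivalent gradient, but memory-friendly: backward once per batch. The
# .grad buffers accumulate the sum, and each batch's activations are freed
# as soon as its backward pass completes.
model.zero_grad()
for x, y in batches:
    loss_fn(model(x), y).backward()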

Question 4 (Automatic Differentiation)¶

Let $f = f_n \circ f_{n-1} \circ ... \circ f_1$ where each $f_i: \mathbb{R} \rightarrow \mathbb{R}$ is a differentiable function which is easy to evaluate and differentiate (each query costs $\mathcal{O}(1)$ at a given point).

  1. In this exercise you will reduce the memory complexity for evaluating $\nabla f (x_0)$ at some point $x_0$.

Assume that you are given $f$ already expressed as a computational graph, and a point $x_0$.

    1. Show how to reduce the memory complexity for computing the gradient using forward mode AD (maintaining the $\mathcal{O}(n)$ computation cost). What is the memory complexity?
    2. Show how to reduce the memory complexity for computing the gradient using backward mode AD (maintaining the $\mathcal{O}(n)$ computation cost). What is the memory complexity?

  2. Can these techniques be generalized for arbitrary computational graphs?

  3. Think how the backprop algorithm can benefit from these techniques when applied to deep architectures (e.g. VGGs, ResNets).

In [23]:
display_answer(hw2.answers.part2_q4)

Your answer:

  1. Forward mode: the classic forward-mode AD update is $\displaystyle v_{j+1} .grad\leftarrow v_{j+1} .fn.derivative( v_{j} .val) \cdotp v_{j} .grad$. To reduce the memory complexity to $\displaystyle O( 1)$ we keep a single variable for the running gradient and a single variable for the value passed on from the previous node, so the update reduces to \begin{equation*} Grad\leftarrow v_{j+1} .fn.derivative( Input) \cdotp Grad \end{equation*} initialized with $Grad=1$ and $Input=v_{0} .val$, where $Input$ is advanced through the chain alongside $Grad$ (see the sketch after this answer).

     Backward mode: the classic backward-mode AD update is $\displaystyle v_{j-1} .grad\leftarrow v_{j} .fn.derivative( v_{j-1} .val) \cdotp v_{j} .grad$. To reduce the memory complexity to $\displaystyle O( 1)$ we again keep a single gradient variable and a single input variable, so the update reduces to \begin{equation*} Grad\leftarrow v_{j} .fn.derivative( Input) \cdotp Grad \end{equation*} with $Grad=1$ and $Input=F_{n}( x)$.

  2. At first thought, for a general computational graph with multiple input nodes we would need 2 variables per input node, giving $\displaystyle O( n)$ memory complexity. Instead, we can traverse an input-to-output path once per gradient element, lowering the memory complexity to $\displaystyle O( 1)$. The problem is that this cannot be parallelized without going back to $2n$ variables.

  3. When applied to deep architectures, this technique lets us fine-tune a minimal number of parameters and observe their effect on the model's performance without using large amounts of memory, as we saw. The downside is that we cannot process multiple parameters in parallel without using $\displaystyle O( n)$ memory.
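As a sanity check of the $\mathcal{O}(1)$-memory forward-mode scheme for a chain $f = f_n \circ \dots \circ f_1$ of scalar functions, here is a minimal sketch (our own illustration, with each $f_i$ given as a (function, derivative) pair):

import math

chain = [(math.sin, math.cos), (math.exp, math.exp)]  # f(x) = exp(sin(x))

def grad_chain(chain, x0):
    val, grad = x0, 1.0               # only two scalars are ever stored
    for f, df in chain:
        grad = df(val) * grad         # chain rule, uses the pre-activation value
        val = f(val)                  # then advance the running value
    return val, grad

# d/dx exp(sin(x)) = cos(x) * exp(sin(x))
print(grad_chain(chain, 0.5))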
$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$

Part 3: Binary Classification with Multilayer Perceptrons¶

In this part we'll implement a general-purpose MLP and binary classifier using PyTorch. We'll implement its training, and also learn about decision boundaries and threshold selection in the context of binary classification. Finally, we'll explore the effect of depth and width on an MLP's performance.

In [1]:
import os
import re
import sys
import glob
import unittest
from typing import Sequence, Tuple

import sklearn
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
import torch.nn as nn
import torchvision.transforms as tvtf
from torch import Tensor

%matplotlib inline
%load_ext autoreload
%autoreload 2
In [2]:
seed = 42
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
plt.rcParams.update({'font.size': 12})
test = unittest.TestCase()

Synthetic Dataset¶

To test our first neural network-based classifiers we'll start by creating a toy binary classification dataset, but one which is not trivial for a linear model.

In [3]:
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
In [4]:
def rotate_2d(X, deg=0):
    """
    Rotates each 2d sample in X of shape (N, 2) by deg degrees.
    """
    a = np.deg2rad(deg)
    return X @ np.array([[np.cos(a), -np.sin(a)],[np.sin(a), np.cos(a)]]).T

def plot_dataset_2d(X, y, n_classes=2, alpha=0.2, figsize=(8, 6), title=None, ax=None):
    if ax is None:
        fig, ax = plt.subplots(1, 1, figsize=figsize)
    for c in range(n_classes):
        ax.scatter(*X[y==c,:].T, alpha=alpha, label=f"class {c}");
        
    ax.set_xlabel("$x_1$"); ax.set_ylabel("$x_2$");
    ax.legend(); ax.set_title((title or '') + f" (n={len(y)})")

We'll split our data into 80% train and validation, and 20% test. To make it a bit more challenging, we'll simulate a somewhat real-world setting where there are multiple populations, and the training/validation data is not sampled iid from the underlying data distribution.

In [5]:
np.random.seed(seed)

N = 10_000
N_train = int(N * .8)

# Create data from two different distributions for the training/validation
X1, y1 = make_moons(n_samples=N_train//2, noise=0.2)
X1 = rotate_2d(X1, deg=10)
X2, y2 = make_moons(n_samples=N_train//2, noise=0.25)
X2 = rotate_2d(X2, deg=50)

# Test data comes from a similar but noisier distribution
X3, y3 = make_moons(n_samples=(N-N_train), noise=0.3)
X3 = rotate_2d(X3, deg=40)

X, y = np.vstack([X1, X2, X3]), np.hstack([y1, y2, y3])
In [6]:
# Train and validation data is from mixture distribution
X_train, X_valid, y_train, y_valid = train_test_split(X[:N_train, :], y[:N_train], test_size=1/3, shuffle=False)

# Test data comes only from the third (held-out) distribution
X_test, y_test = X[N_train:, :], y[N_train:]

fig, ax = plt.subplots(1, 3, figsize=(20, 5))
plot_dataset_2d(X_train, y_train, title='Train', ax=ax[0]);
plot_dataset_2d(X_valid, y_valid, title='Validation', ax=ax[1]);
plot_dataset_2d(X_test, y_test, title='Test', ax=ax[2]);

Now let us create a data loader for each dataset.

In [7]:
from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader

batch_size = 32

dl_train, dl_valid, dl_test = [
    DataLoader(
        dataset=TensorDataset(
            torch.from_numpy(X_).to(torch.float32),
            torch.from_numpy(y_)
        ),
        shuffle=True,
        num_workers=0,
        batch_size=batch_size
    )
    for X_, y_ in [(X_train, y_train), (X_valid, y_valid), (X_test, y_test)]
]

print(f'{len(dl_train.dataset)=}, {len(dl_valid.dataset)=}, {len(dl_test.dataset)=}')
len(dl_train.dataset)=5333, len(dl_valid.dataset)=2667, len(dl_test.dataset)=2000

Simple MLP¶

A multilayer perceptron is arguably the most basic type of neural network model. It is composed of $L$ layers, each layer $l$ with $n_l$ perceptron ("neuron") units. Each perceptron is connected to all outputs of the previous layer (or all inputs in the first layer), calculates their weighted sum, applies a non-linearity and produces a single output.

Each layer $l$ operates on the output of the previous layer ($\vec{y}_{l-1}$) and calculates:

$$ \vec{y}_l = \varphi\left( \mat{W}_l \vec{y}_{l-1} + \vec{b}_l \right),~ \mat{W}_l\in\set{R}^{n_{l}\times n_{l-1}},~ \vec{b}_l\in\set{R}^{n_l},~ l \in \{1,2,\dots,L\}. $$
  • Note that both input and output are vectors. We can think of the above equation as describing a layer of multiple perceptrons.
  • We'll henceforth refer to such layers as fully-connected or FC layers.
  • The first layer accepts the input of the model, i.e. $\vec{y}_0=\vec{x}\in\set{R}^d$.
  • The last layer, $L$, is the output layer, so $\vec{y}_L$ is the output of the model.
  • The layers $1, 2, \dots, L-1$ are called hidden layers.

To begin, let's implement a general multi-layer perceptron model. We'll seek to implement it in a way which is both general in terms of architecture, and also composable so that we can use our MLP in the context of larger models.

TODO: Implement the MLP class in the hw2/mlp.py module.
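For intuition, the layer equation above translates almost directly into a stack of nn.Linear modules; a minimal sketch (not the required hw2/mlp.py implementation) for in_dim=2 and two hidden layers might look like:

import torch.nn as nn

dims = [2, 8, 16]  # y_0 = x in R^2, then two FC layers
layers = []
for n_in, n_out in zip(dims[:-1], dims[1:]):
    layers += [nn.Linear(n_in, n_out), nn.ReLU()]  # W_l y_{l-1} + b_l, then phi
sketch = nn.Sequential(*layers)
print(sketch)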

In [8]:
from hw2.mlp import MLP

mlp = MLP(
    in_dim=2,
    dims=[8, 16, 32, 64],
    nonlins=['relu', 'tanh', nn.LeakyReLU(0.314), 'softmax']
)
mlp
Out[8]:
MLP(
  (model): ModuleList(
    (0): Linear(in_features=2, out_features=8, bias=True)
    (1): ReLU()
    (2): Linear(in_features=8, out_features=16, bias=True)
    (3): Tanh()
    (4): Linear(in_features=16, out_features=32, bias=True)
    (5): LeakyReLU(negative_slope=0.314)
    (6): Linear(in_features=32, out_features=64, bias=True)
    (7): Softmax(dim=1)
  )
)

Let's try our implementation on a batch of data.

In [9]:
x0, y0 = next(iter(dl_train))

yhat0 = mlp(x0)

test.assertEqual(len([*mlp.parameters()]), 8)
test.assertEqual(yhat0.shape, (batch_size, mlp.out_dim))
test.assertTrue(torch.allclose(torch.sum(yhat0, dim=1), torch.tensor(1.0)))
test.assertIsNotNone(yhat0.grad_fn)

yhat0
Out[9]:
tensor([[0.0148, 0.0140, 0.0139,  ..., 0.0169, 0.0191, 0.0196],
        [0.0152, 0.0142, 0.0138,  ..., 0.0163, 0.0187, 0.0199],
        [0.0158, 0.0145, 0.0130,  ..., 0.0166, 0.0187, 0.0200],
        ...,
        [0.0168, 0.0148, 0.0121,  ..., 0.0156, 0.0169, 0.0204],
        [0.0152, 0.0142, 0.0138,  ..., 0.0174, 0.0197, 0.0198],
        [0.0162, 0.0146, 0.0127,  ..., 0.0162, 0.0180, 0.0201]],
       grad_fn=<SoftmaxBackward0>)

MLP for Binary Classification¶

The MLP model we've implemented, while useful, is very general. For the task of binary classification, we would like to add some additional functionality to it: the ability to output a normalized score for a sample being in class one (which we interpret as a probability) and a prediction based on some threshold of this probability. In addition, we need some way to calculate a meaningful threshold based on the data and a trained model at hand.

In order to maintain generality, we'll add this functionality in the form of a wrapper: a BinaryClassifier class that can wrap any model producing two output features, and provide the functionality stated above.

TODO: In the hw2/classifier.py module, implement the BinaryClassifier and the missing parts of its base class, Classifier. Read the method documentation carefully and implement accordingly. You can ignore the roc_threshold method at this stage.
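To clarify the intended behavior (a sketch of the semantics only, not the graded hw2/classifier.py code), the wrapper's two extra operations amount to a softmax over the wrapped model's two output scores, and a threshold on the positive-class probability:

import torch

def predict_proba_sketch(model, x):
    # Normalize the two class scores into probabilities
    return torch.softmax(model(x), dim=1)

def classify_sketch(model, x, threshold=0.5):
    proba_pos = predict_proba_sketch(model, x)[:, 1]
    return (proba_pos >= threshold).to(torch.int)  # 1 = positive class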

In [10]:
from hw2.classifier import BinaryClassifier

bmlp4 = BinaryClassifier(
    model=MLP(in_dim=2, dims=[*[10]*3, 2], nonlins=[*['relu']*3, 'none']),
    threshold=0.5
)
print(bmlp4)

# Test model
test.assertEqual(len([*bmlp4.parameters()]), 8)
test.assertIsNotNone(bmlp4(x0).grad_fn)

# Test forward
yhat0_scores = bmlp4(x0)
test.assertEqual(yhat0_scores.shape, (batch_size, 2))
test.assertFalse(torch.allclose(torch.sum(yhat0_scores, dim=1), torch.tensor(1.0)))

# Test predict_proba
yhat0_proba = bmlp4.predict_proba(x0)
test.assertEqual(yhat0_proba.shape, (batch_size, 2))
test.assertTrue(torch.allclose(torch.sum(yhat0_proba, dim=1), torch.tensor(1.0)))

# Test classify
yhat0 = bmlp4.classify(x0)
test.assertEqual(yhat0.shape, (batch_size,))
test.assertEqual(yhat0.dtype, torch.int)
test.assertTrue(all(yh_ in (0, 1) for yh_ in yhat0))
BinaryClassifier(
  (model): MLP(
    (model): ModuleList(
      (0): Linear(in_features=2, out_features=10, bias=True)
      (1): ReLU()
      (2): Linear(in_features=10, out_features=10, bias=True)
      (3): ReLU()
      (4): Linear(in_features=10, out_features=10, bias=True)
      (5): ReLU()
      (6): Linear(in_features=10, out_features=2, bias=True)
      (7): Identity()
    )
  )
  (pred): Softmax(dim=1)
)

Training¶

Now that we have a classifier, we need to train it. We will abstract the various aspects of training, such as multiple epochs, iterating over batches, early stopping and saving model checkpoints, into a Trainer that will take care of these concerns.

The Trainer class splits the task of training (and evaluating) models into three conceptual levels,

  • Multiple epochs - the fit method, which returns a FitResult containing losses and accuracies for all epochs.
  • Single epoch - the train_epoch and test_epoch methods, which return an EpochResult containing losses per batch and the single accuracy result of the epoch.
  • Single batch - the train_batch and test_batch methods, which return a BatchResult containing a single loss and the number of correctly classified samples in the batch.

It implements the first two levels. Inheriting classes are expected to implement the single-batch level methods since these are model and/or task specific.
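Schematically, the top level ties the other two together roughly like this (a sketch of the control flow only, assuming train_epoch/test_epoch return EpochResults with .losses and .accuracy fields as described above; the real fit must also handle max_batches, printing, early stopping and checkpoints):

def fit_sketch(trainer, dl_train, dl_test, num_epochs):
    train_loss, train_acc, test_loss, test_acc = [], [], [], []
    for epoch in range(num_epochs):
        train_res = trainer.train_epoch(dl_train)  # iterates train_batch
        test_res = trainer.test_epoch(dl_test)     # iterates test_batch
        train_loss += train_res.losses
        train_acc.append(train_res.accuracy)
        test_loss += test_res.losses
        test_acc.append(test_res.accuracy)
    return train_loss, train_acc, test_loss, test_acc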

TODO:

  1. Implement the Trainer's fit method and the ClassifierTrainer's train_batch/test_batch methods, in the hw2/training.py module. You may ignore the Optional parts about early stopping and model checkpoints at this stage.

  2. Set the model's architecture hyper-parameters and the optimizer hyperparameters in part3_arch_hp() and part3_optim_hp(), respectively, in hw2/answers.py.

Since this is a toy dataset, you should be able to quickly get above 85% accuracy even on the test set.

In [11]:
from hw2.training import ClassifierTrainer
from hw2.answers import part3_arch_hp, part3_optim_hp

torch.manual_seed(seed)

hp_arch = part3_arch_hp()
hp_optim = part3_optim_hp()

model = BinaryClassifier(
    model=MLP(
        in_dim=2,
        dims=[*[hp_arch['hidden_dims'],]*hp_arch['n_layers'], 2],
        nonlins=[*[hp_arch['activation'],]*hp_arch['n_layers'], hp_arch['out_activation']]
    ),
    threshold=0.5,
)
print(model)

loss_fn = hp_optim.pop('loss_fn')
optimizer = torch.optim.SGD(params=model.parameters(), **hp_optim)
trainer = ClassifierTrainer(model, loss_fn, optimizer)

fit_result = trainer.fit(dl_train, dl_valid, num_epochs=20, print_every=10);

test.assertGreaterEqual(fit_result.train_acc[-1], 85.0)
test.assertGreaterEqual(fit_result.test_acc[-1], 75.0)
BinaryClassifier(
  (model): MLP(
    (model): ModuleList(
      (0): Linear(in_features=2, out_features=500, bias=True)
      (1): ReLU()
      (2): Linear(in_features=500, out_features=500, bias=True)
      (3): ReLU()
      (4): Linear(in_features=500, out_features=2, bias=True)
      (5): Identity()
    )
  )
  (pred): Softmax(dim=1)
)
--- EPOCH 1/20 ---
--- EPOCH 11/20 ---
--- EPOCH 20/20 ---
In [12]:
from cs236781.plot import plot_fit

plot_fit(fit_result, log_loss=False, train_test_overlay=True);

Decision Boundary¶

An important part of understanding what a non-linear classifier like our MLP is doing is visualizing its decision boundaries. When we only have two input features, these are relatively simple to visualize: we can plot our data on the plane, and evaluate the classifier on a regular 2D grid in order to approximate the decision boundary.

TODO: Implement the plot_decision_boundary_2d function in the hw2/classifier.py module.
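One common way to implement this (a sketch under our own naming, not necessarily how the graded function must be written) is to classify every point of a dense grid spanning the data, and hand the result to plt.contourf:

import numpy as np
import torch

def boundary_grid_sketch(classifier, X, n=200):
    # X is assumed to be a numpy array of shape (N, 2)
    x1 = np.linspace(X[:, 0].min(), X[:, 0].max(), n)
    x2 = np.linspace(X[:, 1].min(), X[:, 1].max(), n)
    G1, G2 = np.meshgrid(x1, x2)
    grid = torch.from_numpy(np.stack([G1.ravel(), G2.ravel()], axis=1)).float()
    with torch.no_grad():
        labels = classifier.classify(grid).numpy().reshape(G1.shape)
    return G1, G2, labels  # e.g. plt.contourf(G1, G2, labels, alpha=0.3)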

In [13]:
from hw2.classifier import plot_decision_boundary_2d

fig, ax = plot_decision_boundary_2d(model, *dl_valid.dataset.tensors)
/home/dansdeor/miniconda3/envs/cs236781-hw/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /opt/conda/conda-bld/pytorch_1639180588308/work/aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]

Threshold Selection¶

Another important component, especially in the context of binary classification, is threshold selection. Until now, we arbitrarily chose a threshold of 0.5 when deciding the class label based on the probability score we calculated via softmax. In other words, we classified a sample to class 1 (the 'positive' class) when its probability score was greater than or equal to 0.5.

However, in real-world classification problems we'll need to choose our threshold wisely, based on the domain-specific requirements of the problem. For example, depending on our application, we might care more about high sensitivity (correctly classifying positive examples), while for other applications specificity (correctly classifying negative examples) is more important.

One way to understand the mistakes a model is making is to look at its Confusion Matrix. From it, we easily see e.g. the false-negative rate (FNR) and false-positive rate (FPR).

Let's look at the confusion matrices on the test and validation data using the model we trained above.

In [14]:
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

def plot_confusion(classifier, x: np.ndarray, y: np.ndarray, ax=None):
    y_hat = classifier.classify(torch.from_numpy(x).to(torch.float32)).numpy()
    conf_mat = confusion_matrix(y, y_hat, normalize='all')
    ConfusionMatrixDisplay(conf_mat).plot(ax=ax, colorbar=False)
    
model.threshold = 0.5

_, axes = plt.subplots(1, 2, figsize=(10, 5))
axes[0].set_title("Train"); axes[1].set_title("Validation");
plot_confusion(model, X_train, y_train, ax=axes[0])
plot_confusion(model, X_valid, y_valid, ax=axes[1])

We can see that the model makes a different number of false-positive and false-negative errors. Clearly, this proportion would change if the classification threshold were different.

A very common way to select the classification threshold is to find one which optimally balances the FPR and FNR. This can be done by plotting the model's ROC curve, which shows 1-FNR (the TPR) vs. FPR for multiple threshold values, and selecting the point closest to the ideal point (0, 1).

TODO: Implement the select_roc_thresh function in the hw2.classifier module.
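The selection rule described above can be sketched with sklearn's roc_curve (shown here for intuition; select_roc_thresh need not be implemented this way):

import numpy as np
from sklearn.metrics import roc_curve

def closest_to_ideal_thresh(y_true, proba_pos):
    fpr, tpr, thresholds = roc_curve(y_true, proba_pos)
    dists = np.sqrt(fpr ** 2 + (1 - tpr) ** 2)  # distance to the ideal point (0, 1)
    return thresholds[np.argmin(dists)]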

In [15]:
from hw2.classifier import select_roc_thresh


optimal_thresh = select_roc_thresh(model, *dl_valid.dataset.tensors, plot=True)

Let's see the effect of our threshold selection on the confusion matrix and decision boundary.

In [16]:
model.threshold = optimal_thresh

_, axes = plt.subplots(1, 2, figsize=(10, 5))
axes[0].set_title("Train"); axes[1].set_title("Validation");
plot_confusion(model, X_train, y_train, ax=axes[0])
plot_confusion(model, X_valid, y_valid, ax=axes[1])
fig, ax = plot_decision_boundary_2d(model, *dl_valid.dataset.tensors)

Architecture Experiments¶

Now, equipped with the tools we've implemented so far, we'll experiment with various MLP architectures. We'll study the effect of the model's depth (number of hidden layers) and width (number of neurons per hidden layer) on its decision boundaries and the resulting performance. After training, we will use the validation set for threshold selection, and seek to maximize the performance on the test set.

TODO: Implement the mlp_experiment function in hw2/experiments.py. You are free to configure any model and optimization hyperparameters however you like, except for the specified width and depth. Experiment with various options for these other hyperparameters and try to obtain the best results you can.

In [17]:
from itertools import product
from tqdm.auto import tqdm
from hw2.experiments import mlp_experiment

torch.manual_seed(seed)

depths = [1, 2, 4]
widths = [2, 8, 32]
exp_configs = product(enumerate(widths), enumerate(depths))
fig, axes = plt.subplots(len(widths), len(depths), figsize=(10*len(depths), 10*len(widths)), squeeze=False)
test_accs = []

for (i, width), (j, depth) in tqdm(list(exp_configs)):
    model, thresh, valid_acc, test_acc = mlp_experiment(
        depth, width, dl_train, dl_valid, dl_test, n_epochs=10
    )
    test_accs.append(test_acc)
    fig, ax = plot_decision_boundary_2d(model, *dl_test.dataset.tensors, ax=axes[i, j])
    ax.set_title(f"{depth=}, {width=}")
    ax.text(ax.get_xlim()[0]*.95, ax.get_ylim()[1]*.95, f"{thresh=:.2f}\n{valid_acc=:.1f}%\n{test_acc=:.1f}%", va="top")
    
# Assert minimal performance requirements.
# You should be able to do better than these by at least 5%.
test.assertGreaterEqual(np.min(test_accs), 75.0)
test.assertGreaterEqual(np.quantile(test_accs, 0.75), 85.0)

Questions¶

TODO Answer the following questions. Write your answers in the appropriate variables in the module hw2/answers.py.

In [18]:
from cs236781.answers import display_answer
import hw2.answers

Question 1¶

Consider the first binary classifier you trained in this notebook and the loss/accuracy curves we plotted for it on the train and validation sets, as well as the decision boundary plot.

Based on those plots, explain qualitatively whether or not your model has:

  1. High Optimization error?
  2. High Generalization error?
  3. High Approximation error?

Explain your answers for each of the above. Since this is a qualitative question, assume "high" simply means "I would take measures in order to decrease it further".

In [19]:
display_answer(hw2.answers.part3_q1)

Your answer:

  1. We have a relatively low optimization error. As can be seen from the graphs, during training the model reaches a relatively low training loss and keeps it steady throughout, while the training accuracy reaches a high percentage. We can conclude that the model fits the given training data well.

  2. Our generalization error is higher than the optimization error: both the test loss and the test accuracy jump around during training. The jumps shrink somewhat as training continues, but the gap to the training curves does not go away, so overall we have a high generalization error.

  3. Looking at the decision boundary, the model succeeded in drawing an accurate boundary between the different classes, so the approximation error is low.

Question 2¶

Consider the first binary classifier you trained in this notebook and the confusion matrices we plotted for it.

For the validation dataset, would you expect the FPR or the FNR to be higher, and why? Recall that you have full knowledge of the data generating process.

In [20]:
display_answer(hw2.answers.part3_q2)

Your answer: Based on the confusion matrix we derived for the training set, the FNR is greater than the FPR, so we might expect the same on the validation set. However, when we look at the decision boundary over the validation data, a relatively large number of class-0 samples fall inside the region the model assigns to class 1, so the model would wrongly predict those samples as class 1. From this point of view, we actually expect the model to have a higher FPR than FNR on the validation set, which is indeed what its confusion matrix shows.

Question 3¶

You're training a binary classifier to screen a large cohort of patients for some disease, with the aim of detecting the disease early, before any symptoms appear. You train the model on easy-to-obtain features, so screening each individual patient is simple and low-cost. In case the model classifies a patient as sick, she must then be sent for further testing in order to confirm the illness. Assume that these further tests are expensive and involve high risk to the patient. Assume also that once diagnosed, a low-cost treatment exists.

You wish to screen as many people as possible at the lowest possible cost and loss of life. Would you still choose the same "optimal" point on the ROC curve as above? If not, how would you choose it? Answer these questions for two possible scenarios:

  1. A person with the disease will develop non-lethal symptoms that immediately confirm the diagnosis and can then be treated.
  2. A person with the disease shows no clear symptoms and may die with high probability if not diagnosed early enough, either by your model or by the expensive test.

Explain your answers.

In [21]:
display_answer(hw2.answers.part3_q3)

Your answer:

  1. For a person whose disease will develop non-lethal symptoms that immediately confirm the diagnosis, a false positive is riskier and more costly than a false negative: the follow-up tests are expensive and involve high risk to the patient, while a missed detection will still be caught and treated (at low cost) once symptoms appear. Therefore the FPR is more costly than the FNR, and we would pick the point on the ROC curve that minimizes the FPR.

  2. In this scenario, wrongly diagnosing a sick patient as healthy is more dangerous than a false positive, since an undiagnosed patient may die with high probability. Therefore the FNR is more costly than the FPR, and we would pick the point on the ROC curve that minimizes the FNR.

Question 4¶

Analyze your results from the Architecture Experiment.

  1. Explain the decision boundaries and model performance you obtained for the columns (fixed depth, width varies).
  2. Explain the decision boundaries and model performance you obtained for the rows (fixed width, depth varies).
  3. Compare and explain the results for the following pair of configurations, which have the same number of total parameters:
    • depth=1, width=32 and depth=4, width=8
  4. Explain the effect of threshold selection on the validation set: did it improve the results on the test set? Why?
In [22]:
display_answer(hw2.answers.part3_q4)

Your answer:

  1. As we can see, the test and validation accuracy improve as we go down a column of the plot grid. Increasing the width makes the model more expressive, which produces a more flexible, "breakable" decision boundary.
  2. Moving along the rows, the model improves when we increase the depth to depth=2. Each added layer applies another activation function to an already-transformed feature vector of each sample, which increases the network's non-linear capacity to draw a curved decision boundary. When we increased the depth to 4, the model started to overfit the training data, and we saw a minor decrease in test accuracy.
  3. Our results show that a network of one hidden layer with 32 neurons gives better validation and test accuracy than a network of 4 layers of width 8. This matches the general guideline that wider networks are better at capturing complex patterns in the given data, while deeper networks extract more abstract patterns from the data, which is not the main requirement of this task.
  4. Threshold selection on the validation set did not improve the test accuracy. Because the samples are distributed differently in the two sets, the threshold that is best for one set does not necessarily improve the model's results on the other.
$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$

Part 4: Convolutional Neural Networks¶

In this part we will explore convolutional networks. We'll implement a common block-based deep CNN pattern, with and without residual connections.

In [1]:
import os
import re
import sys
import glob
import numpy as np
import matplotlib.pyplot as plt
import unittest
import torch
import torchvision
import torchvision.transforms as tvtf

%matplotlib inline
%load_ext autoreload
%autoreload 2
In [2]:
seed = 42
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
plt.rcParams.update({'font.size': 12})
test = unittest.TestCase()

Reminder: Convolutional layers and networks¶

Convolutional layers are the most essential building blocks of state-of-the-art deep learning image classification models, and also play an important role in many other tasks. As we saw in the tutorial, when applied to images, convolutional layers operate on and produce volumes (3D tensors) of activations.

A convenient way to interpret convolutional layers for images is as a collection of 3D learnable filters, each of which operates on a small spatial region of the input volume. Each filter is convolved with the input volume ("slides over it"), and a dot product is computed at each location followed by a non-linearity which produces one activation. All these activations produce a 2D plane known as a feature map. Multiple feature maps (one for each filter) comprise the output volume.

A crucial property of convolutional layers is their translation equivariance, i.e. shifting the input results in an equivalently shifted output. This produces the ability to detect features regardless of their spatial location in the input.

Convolutional network architectures usually follow a pattern of basic repeating blocks: one or more convolutional layers, each followed by a non-linearity (generally ReLU), and then a pooling layer to reduce the spatial dimensions. Usually, the number of convolutional filters increases the deeper they are in the network. These layers are meant to extract features from the input. Then, one or more fully-connected layers are used to combine the extracted features into the required number of output class scores.

Building convolutional networks with PyTorch¶

PyTorch provides all the basic building blocks needed for creating a convolutional architecture within the torch.nn package. Let's use them to create a basic convolutional network with the following architecture pattern:

[(CONV -> ACT)*P -> POOL]*(N/P) -> (FC -> ACT)*M -> FC

Here $N$ is the total number of convolutional layers, $P$ specifies how many convolutions to perform before each pooling layer and $M$ specifies the number of hidden fully-connected layers before the final output layer.

TODO: Complete the implementation of the CNN class in the hw2/cnn.py module. Use PyTorch's nn.Conv2d and nn.MaxPool2d for the convolution and pooling layers. It's recommended to implement the missing functionality in the order of the class' methods.
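To make the pattern concrete, here it is spelled out as a plain nn.Sequential for N=4, P=2, M=1 on 3x100x100 inputs (a hand-written sketch only; the graded CNN class must construct this programmatically from its parameters):

import torch.nn as nn

sketch = nn.Sequential(
    # [(CONV -> ACT)*2 -> POOL] * 2
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),
    nn.Flatten(),
    # (FC -> ACT)*1 -> FC; two 2x2 poolings shrink 100x100 to 25x25
    nn.Linear(32 * 25 * 25, 100), nn.ReLU(),
    nn.Linear(100, 10),
)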

In [3]:
from hw2.cnn import CNN

test_params = [
    dict(
        in_size=(3,100,100), out_classes=10,
        channels=[32]*4, pool_every=2, hidden_dims=[100]*2,
        conv_params=dict(kernel_size=3, stride=1, padding=1),
        activation_type='relu', activation_params=dict(),
        pooling_type='max', pooling_params=dict(kernel_size=2),
    ),
    dict(
        in_size=(3,100,100), out_classes=10,
        channels=[32]*4, pool_every=2, hidden_dims=[100]*2,
        conv_params=dict(kernel_size=5, stride=2, padding=3),
        activation_type='lrelu', activation_params=dict(negative_slope=0.05),
        pooling_type='avg', pooling_params=dict(kernel_size=3),
    ),
    dict(
        in_size=(3,100,100), out_classes=3,
        channels=[16]*5, pool_every=3, hidden_dims=[100]*1,
        conv_params=dict(kernel_size=2, stride=2, padding=2),
        activation_type='lrelu', activation_params=dict(negative_slope=0.1),
        pooling_type='max', pooling_params=dict(kernel_size=2),
    ),
]

for i, params in enumerate(test_params):
    torch.manual_seed(seed)
    net = CNN(**params)
    print(f"\n=== test {i=} ===")
    print(net)

    torch.manual_seed(seed)
    test_out = net(torch.ones(1, 3, 100, 100))
    print(f'{test_out=}')

    expected_out = torch.load(f'tests/assets/expected_conv_out_{i:02d}.pt')
    print(f'max_diff={torch.max(torch.abs(test_out - expected_out)).item()}')
    test.assertTrue(torch.allclose(test_out, expected_out, atol=1e-3))
=== test i=0 ===
CNN(
  (feature_extractor): Sequential(
    (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU()
    (2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU()
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU()
    (7): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU()
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (mlp): MLP(
    (model): ModuleList(
      (0): Linear(in_features=20000, out_features=100, bias=True)
      (1): ReLU()
      (2): Linear(in_features=100, out_features=100, bias=True)
      (3): ReLU()
      (4): Linear(in_features=100, out_features=10, bias=True)
      (5): Identity()
    )
  )
)
test_out=tensor([[ 0.0745, -0.1058,  0.0928,  0.0476,  0.0057,  0.0051,  0.0938, -0.0582,
          0.0573,  0.0583]], grad_fn=<AddmmBackward0>)
max_diff=0.0

=== test i=1 ===
CNN(
  (feature_extractor): Sequential(
    (0): Conv2d(3, 32, kernel_size=(5, 5), stride=(2, 2), padding=(3, 3))
    (1): LeakyReLU(negative_slope=0.05)
    (2): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2), padding=(3, 3))
    (3): LeakyReLU(negative_slope=0.05)
    (4): AvgPool2d(kernel_size=3, stride=3, padding=0)
    (5): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2), padding=(3, 3))
    (6): LeakyReLU(negative_slope=0.05)
    (7): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2), padding=(3, 3))
    (8): LeakyReLU(negative_slope=0.05)
    (9): AvgPool2d(kernel_size=3, stride=3, padding=0)
  )
  (mlp): MLP(
    (model): ModuleList(
      (0): Linear(in_features=32, out_features=100, bias=True)
      (1): LeakyReLU(negative_slope=0.05)
      (2): Linear(in_features=100, out_features=100, bias=True)
      (3): LeakyReLU(negative_slope=0.05)
      (4): Linear(in_features=100, out_features=10, bias=True)
      (5): Identity()
    )
  )
)
test_out=tensor([[ 0.0724, -0.0030,  0.0637, -0.0073,  0.0932, -0.0662, -0.0656,  0.0076,
          0.0193,  0.0241]], grad_fn=<AddmmBackward0>)
max_diff=0.0

=== test i=2 ===
CNN(
  (feature_extractor): Sequential(
    (0): Conv2d(3, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
    (1): LeakyReLU(negative_slope=0.1)
    (2): Conv2d(16, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
    (3): LeakyReLU(negative_slope=0.1)
    (4): Conv2d(16, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
    (5): LeakyReLU(negative_slope=0.1)
    (6): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (7): Conv2d(16, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
    (8): LeakyReLU(negative_slope=0.1)
    (9): Conv2d(16, 16, kernel_size=(2, 2), stride=(2, 2), padding=(2, 2))
    (10): LeakyReLU(negative_slope=0.1)
  )
  (mlp): MLP(
    (model): ModuleList(
      (0): Linear(in_features=400, out_features=100, bias=True)
      (1): LeakyReLU(negative_slope=0.1)
      (2): Linear(in_features=100, out_features=3, bias=True)
      (3): Identity()
    )
  )
)
test_out=tensor([[-0.0004, -0.0094,  0.0817]], grad_fn=<AddmmBackward0>)
max_diff=0.0

As before, we'll wrap our model with a Classifier that provides the necessary functionality for calculating probability scores and obtaining class label predictions. This time, we'll use a simple approach that simply selects the class with the highest score.

TODO: Implement the ArgMaxClassifier in the hw2/classifier.py module.

In [4]:
from hw2.classifier import ArgMaxClassifier

model = ArgMaxClassifier(model=CNN(**test_params[0]))

test_image = torch.randint(low=0, high=256, size=(3, 100, 100), dtype=torch.float).unsqueeze(0)
test.assertEqual(model.classify(test_image).shape, (1,))
test.assertEqual(model.predict_proba(test_image).shape, (1, 10))
test.assertAlmostEqual(torch.sum(model.predict_proba(test_image)).item(), 1.0, delta=1e-3)

Let's now load CIFAR-10 to use as our dataset.

In [5]:
data_dir = os.path.expanduser('~/.pytorch-datasets')
ds_train = torchvision.datasets.CIFAR10(root=data_dir, download=True, train=True, transform=tvtf.ToTensor())
ds_test = torchvision.datasets.CIFAR10(root=data_dir, download=True, train=False, transform=tvtf.ToTensor())

print(f'Train: {len(ds_train)} samples')
print(f'Test: {len(ds_test)} samples')

x0,_ = ds_train[0]
in_size = x0.shape
num_classes = 10
print('input image size =', in_size)
Files already downloaded and verified
Files already downloaded and verified
Train: 50000 samples
Test: 10000 samples
input image size = torch.Size([3, 32, 32])

Now, as usual, as a sanity test let's make sure we can overfit a tiny dataset with our model. But first we need to adapt our Trainer to PyTorch models.

TODO:

  1. Complete the implementation of the ClassifierTrainer class in the hw2/training.py module if you haven't done so already.
  2. Set the optimizer hyperparameters in part4_optim_hp() in hw2/answers.py.
In [6]:
from hw2.training import ClassifierTrainer
from hw2.answers import part4_optim_hp

torch.manual_seed(seed)

# Define a tiny part of the CIFAR-10 dataset to overfit it
batch_size = 2
max_batches = 25
dl_train = torch.utils.data.DataLoader(ds_train, batch_size, shuffle=False)

# Create model, loss and optimizer instances
model = ArgMaxClassifier(
    model=CNN(
        in_size, num_classes, channels=[32], pool_every=1, hidden_dims=[100],
        conv_params=dict(kernel_size=3, stride=1, padding=1),
        pooling_params=dict(kernel_size=2),
    )
)

hp_optim = part4_optim_hp()
loss_fn = hp_optim.pop('loss_fn')
optimizer = torch.optim.SGD(params=model.parameters(), **hp_optim)

# Use ClassifierTrainer to run only the training loop a few times.
trainer = ClassifierTrainer(model, loss_fn, optimizer, device)
best_acc = 0
for i in range(25):
    res = trainer.train_epoch(dl_train, max_batches=max_batches, verbose=(i%5==0))
    best_acc = res.accuracy if res.accuracy > best_acc else best_acc
    
# Test overfitting
test.assertGreaterEqual(best_acc, 90)

Residual Networks¶

A very common addition to the basic convolutional architecture described above is shortcut connections. First proposed by He et al. (2016), this simple addition has been shown to be a crucial ingredient for achieving effective learning with very deep networks. Virtually all state-of-the-art image classification models from recent years use this technique.

The idea is to add a shortcut, or skip, around every two or more convolutional layers:

On the left we see an example of a regular Residual Block, which takes a 64-channel input and performs two 3x3 convolutions, whose result is added to the original input.
On the right we see an example of a Bottleneck Residual Block, which takes a 256-channel input, projects it to a 64-channel tensor with a 1x1 convolution, then performs an inner 3x3 convolution, followed by another 1x1 projection convolution back to the original number of channels, 256. The output is then added to the original input.

Overall, we can denote the structure of the bottleneck channels in the given example as 256->64->64->256, where the first and last arrows denote the 1x1 convolutions, and the middle arrow is the inner convolution. Note that a 1x1 convolution with the default parameters (in PyTorch) is defined such that the only dimension of the tensor that changes is the number of channels.

This adds an easy way for the network to learn identity mappings: set the weight values to be very small. The outcome is that the convolutional layers learn a residual mapping, i.e. some delta that is applied to the identity map, instead of actually learning a completely new mapping from scratch.

Let's start by implementing a general residual block, representing a structure similar to the above diagrams. Our residual block will be composed of:

  • A "main path" with some number of convolutional layers with ReLU between them. Optionally, we'll also apply dropout and batch normalization layers (in this order) between the convolutions, before the ReLU.
  • A "shortcut path" implementing an identity mapping around the main path. In case of a different number of input/output channels, the shortcut path should contain an additional 1x1 convolution to project the channel dimension.
  • The sum of the main and shortcut paths' outputs is passed through a ReLU and returned.

TODO: Complete the implementation of the ResidualBlock's __init__() method in the hw2/cnn.py module.
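The essence of the block's forward logic is only a few lines; here is a minimal sketch (without the optional dropout/batchnorm, and not the graded implementation):

import torch
import torch.nn as nn

class TinyResBlock(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.main_path = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
        )
        # Project the channel dimension on the shortcut only if it changes
        self.shortcut_path = (
            nn.Identity() if in_channels == out_channels
            else nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        )

    def forward(self, x):
        return torch.relu(self.main_path(x) + self.shortcut_path(x))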

In [7]:
from hw2.cnn import ResidualBlock

torch.manual_seed(seed)

resblock = ResidualBlock(
    in_channels=3, channels=[6, 4]*2, kernel_sizes=[3, 5]*2,
    batchnorm=True, dropout=0.2
)
print(resblock)

torch.manual_seed(seed)
test_out = resblock(torch.ones(1, 3, 32, 32))
print(f'{test_out.shape=}')

expected_out = torch.load('tests/assets/expected_resblock_out.pt')
print(f'max_diff={torch.max(torch.abs(test_out - expected_out)).item()}')
test.assertTrue(torch.allclose(test_out, expected_out, atol=1e-3))
ResidualBlock(
  (main_path): Sequential(
    (0): Conv2d(3, 6, kernel_size=(3, 3), stride=(1, 1), padding=same)
    (1): Dropout2d(p=0.2, inplace=False)
    (2): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (3): ReLU()
    (4): Conv2d(6, 4, kernel_size=(5, 5), stride=(1, 1), padding=same)
    (5): Dropout2d(p=0.2, inplace=False)
    (6): BatchNorm2d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (7): ReLU()
    (8): Conv2d(4, 6, kernel_size=(3, 3), stride=(1, 1), padding=same)
    (9): Dropout2d(p=0.2, inplace=False)
    (10): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (11): ReLU()
    (12): Conv2d(6, 4, kernel_size=(5, 5), stride=(1, 1), padding=same)
  )
  (shortcut_path): Sequential(
    (0): Identity()
    (1): Conv2d(3, 4, kernel_size=(1, 1), stride=(1, 1), bias=False)
  )
)
test_out.shape=torch.Size([1, 4, 32, 32])
max_diff=5.960464477539062e-07

Bottleneck Blocks¶

In the ResNet Block diagram shown above, the right block is called a bottleneck block. This type of block is mainly used deep in the network, where the feature space becomes increasingly high-dimensional (i.e. there are many channels).

Instead of applying a KxK conv layer on the original input channels, a bottleneck block first projects to a lower number of features (channels), applies the KxK conv on the result, and then projects back to the original feature space. Both projections are performed with 1x1 convolutions.

TODO: Complete the implementation of the ResidualBottleneckBlock in the hw2/cnn.py module.

In [8]:
from hw2.cnn import ResidualBottleneckBlock

torch.manual_seed(seed)
resblock_bn = ResidualBottleneckBlock(
    in_out_channels=256, inner_channels=[64, 32, 64], inner_kernel_sizes=[3, 5, 3],
    batchnorm=False, dropout=0.1, activation_type="lrelu"
)
print(resblock_bn)

# Test a forward pass
torch.manual_seed(seed)
test_in  = torch.ones(1, 256, 32, 32)
test_out = resblock_bn(test_in)
print(f'{test_out.shape=}')
assert test_out.shape == test_in.shape 

expected_out = torch.load('tests/assets/expected_resblock_bn_out.pt')
print(f'max_diff={torch.max(torch.abs(test_out - expected_out)).item()}')
test.assertTrue(torch.allclose(test_out, expected_out, atol=1e-3))
ResidualBottleneckBlock(
  (main_path): Sequential(
    (0): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), padding=same)
    (1): Dropout2d(p=0.1, inplace=False)
    (2): LeakyReLU(negative_slope=0.01)
    (3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=same)
    (4): Dropout2d(p=0.1, inplace=False)
    (5): LeakyReLU(negative_slope=0.01)
    (6): Conv2d(64, 32, kernel_size=(5, 5), stride=(1, 1), padding=same)
    (7): Dropout2d(p=0.1, inplace=False)
    (8): LeakyReLU(negative_slope=0.01)
    (9): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=same)
    (10): Dropout2d(p=0.1, inplace=False)
    (11): LeakyReLU(negative_slope=0.01)
    (12): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), padding=same)
  )
  (shortcut_path): Sequential(
    (0): Identity()
  )
)
test_out.shape=torch.Size([1, 256, 32, 32])
max_diff=1.1920928955078125e-07

Now, based on the ResidualBlock, we'll implement our own variation of a residual network (ResNet), with the following architecture:

[-> (CONV -> ACT)*P -> POOL]*(N/P) -> (FC -> ACT)*M -> FC
 \------- SKIP ------/

Note that $N$, $P$ and $M$ are as before, however now $P$ also controls the number of convolutional layers to add a skip-connection to.

TODO: Complete the implementation of the ResNet class in the hw2/cnn.py module. You must use your ResidualBlocks or ResidualBottleneckBlocks to group together every $P$ convolutional layers.

In [9]:
from hw2.cnn import ResNet

test_params = [
    dict(
        in_size=(3,100,100), out_classes=10, channels=[32, 64]*3,
        pool_every=4, hidden_dims=[100]*2,
        activation_type='lrelu', activation_params=dict(negative_slope=0.01),
        pooling_type='avg', pooling_params=dict(kernel_size=2),
        batchnorm=True, dropout=0.1,
        bottleneck=False
    ),
    dict(
        # create 64->16->64 bottlenecks
        in_size=(3,100,100), out_classes=5, channels=[64, 16, 64]*4,
        pool_every=3, hidden_dims=[64]*1,
        activation_type='tanh',
        pooling_type='max', pooling_params=dict(kernel_size=2),
        batchnorm=True, dropout=0.1,
        bottleneck=True
    )
]

for i, params in enumerate(test_params):
    torch.manual_seed(seed)
    net = ResNet(**params)
    print(f"\n=== test {i=} ===")
    print(net)

    torch.manual_seed(seed)
    test_out = net(torch.ones(1, 3, 100, 100))
    print(f'{test_out=}')
    
    expected_out = torch.load(f'tests/assets/expected_resnet_out_{i:02d}.pt')
    print(f'max_diff={torch.max(torch.abs(test_out - expected_out)).item()}')
    test.assertTrue(torch.allclose(test_out, expected_out, atol=1e-3))
=== test i=0 ===
ResNet(
  (feature_extractor): Sequential(
    (0): ResidualBlock(
      (main_path): Sequential(
        (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (1): Dropout2d(p=0.1, inplace=False)
        (2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): LeakyReLU(negative_slope=0.01)
        (4): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (5): Dropout2d(p=0.1, inplace=False)
        (6): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (7): LeakyReLU(negative_slope=0.01)
        (8): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (9): Dropout2d(p=0.1, inplace=False)
        (10): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (11): LeakyReLU(negative_slope=0.01)
        (12): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=same)
      )
      (shortcut_path): Sequential(
        (0): Identity()
        (1): Conv2d(3, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
    )
    (1): AvgPool2d(kernel_size=2, stride=2, padding=0)
    (2): ResidualBlock(
      (main_path): Sequential(
        (0): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (1): Dropout2d(p=0.1, inplace=False)
        (2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): LeakyReLU(negative_slope=0.01)
        (4): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=same)
      )
      (shortcut_path): Sequential(
        (0): Identity()
      )
    )
  )
  (mlp): MLP(
    (model): ModuleList(
      (0): Linear(in_features=160000, out_features=100, bias=True)
      (1): LeakyReLU(negative_slope=0.01)
      (2): Linear(in_features=100, out_features=100, bias=True)
      (3): LeakyReLU(negative_slope=0.01)
      (4): Linear(in_features=100, out_features=10, bias=True)
      (5): Identity()
    )
  )
)
test_out=tensor([[ 0.0422,  0.0332,  0.1870, -0.0532, -0.0742,  0.1143, -0.0617, -0.0467,
          0.0852,  0.0221]], grad_fn=<AddmmBackward0>)
max_diff=1.1920928955078125e-07

=== test i=1 ===
ResNet(
  (feature_extractor): Sequential(
    (0): ResidualBlock(
      (main_path): Sequential(
        (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (1): Dropout2d(p=0.1, inplace=False)
        (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): Tanh()
        (4): Conv2d(64, 16, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (5): Dropout2d(p=0.1, inplace=False)
        (6): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (7): Tanh()
        (8): Conv2d(16, 64, kernel_size=(3, 3), stride=(1, 1), padding=same)
      )
      (shortcut_path): Sequential(
        (0): Identity()
        (1): Conv2d(3, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      )
    )
    (1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (2): ResidualBottleneckBlock(
      (main_path): Sequential(
        (0): Conv2d(64, 16, kernel_size=(1, 1), stride=(1, 1), padding=same)
        (1): Dropout2d(p=0.1, inplace=False)
        (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): Tanh()
        (4): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (5): Dropout2d(p=0.1, inplace=False)
        (6): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (7): Tanh()
        (8): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1), padding=same)
      )
      (shortcut_path): Sequential(
        (0): Identity()
      )
    )
    (3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (4): ResidualBottleneckBlock(
      (main_path): Sequential(
        (0): Conv2d(64, 16, kernel_size=(1, 1), stride=(1, 1), padding=same)
        (1): Dropout2d(p=0.1, inplace=False)
        (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): Tanh()
        (4): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (5): Dropout2d(p=0.1, inplace=False)
        (6): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (7): Tanh()
        (8): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1), padding=same)
      )
      (shortcut_path): Sequential(
        (0): Identity()
      )
    )
    (5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (6): ResidualBottleneckBlock(
      (main_path): Sequential(
        (0): Conv2d(64, 16, kernel_size=(1, 1), stride=(1, 1), padding=same)
        (1): Dropout2d(p=0.1, inplace=False)
        (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (3): Tanh()
        (4): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=same)
        (5): Dropout2d(p=0.1, inplace=False)
        (6): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (7): Tanh()
        (8): Conv2d(16, 64, kernel_size=(1, 1), stride=(1, 1), padding=same)
      )
      (shortcut_path): Sequential(
        (0): Identity()
      )
    )
    (7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (mlp): MLP(
    (model): ModuleList(
      (0): Linear(in_features=2304, out_features=64, bias=True)
      (1): Tanh()
      (2): Linear(in_features=64, out_features=5, bias=True)
      (3): Identity()
    )
  )
)
test_out=tensor([[ 0.0237, -0.1945, -0.0085, -0.4024, -0.2667]],
       grad_fn=<AddmmBackward0>)
max_diff=2.3096799850463867e-07

Questions¶

TODO Answer the following questions. Write your answers in the appropriate variables in the module hw2/answers.py.

In [10]:
from cs236781.answers import display_answer
import hw2.answers

Question 1¶

Consider the bottleneck block from the right side of the ResNet diagram above. Compare it to a regular block that performs two 3x3 convs directly on the 256-channel input (i.e. as shown in the left side of the diagram, with a different number of channels). Explain the differences between the regular block and the bottleneck block in terms of:

  1. Number of parameters. Calculate the exact numbers for these two examples.
  2. Number of floating point operations required to compute an output (qualitative assessment).
  3. Ability to combine the input: (1) spatially (within feature maps); (2) across feature maps.
In [11]:
display_answer(hw2.answers.part4_q1)

Your answer:

  1. For the regular block, the first conv has $\displaystyle 256\cdot ( 256\cdot 3\cdot 3+1) =590080$ parameters (including biases), and the second conv contributes another 590080, so the regular block has 1180160 parameters in total. For the bottleneck block, we have a total of:

\begin{equation*} 64\cdot ( 1\cdot 1\cdot 256+1) +64\cdot ( 3\cdot 3\cdot 64+1) +256\cdot ( 1\cdot 1\cdot 64+1) =70016 \end{equation*}

parameters.

  2. The number of FLOPs performed by a given conv layer is
\begin{equation*} \mathrm{FLOPs} = \left( ( K_{h} \cdot K_{w}) \cdot C_{in} +1\right) \cdot ( H_{out} \cdot W_{out}) \cdot C_{out} \end{equation*}

Because the bottleneck block uses 1x1 kernels for two of its layers and runs its single 3x3 conv on only 64 channels, the regular block performs far more FLOPs than the bottleneck residual block.

  3. The regular block's 3x3 convs combine the input both spatially (within each feature map) and across all 256 input channels. In the bottleneck block, the two 1x1 convs combine information only across feature maps: they first reduce and then restore the number of channels, with no spatial mixing, while spatial combination happens only in the middle 3x3 conv, which operates on just 64 channels. The bottleneck block thus trades some spatial and cross-channel expressiveness for a large saving in parameters and computation, which is what makes the expensive 3x3 convs affordable.
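These counts can be sanity-checked with a few lines of PyTorch. The following is an illustrative sketch, not part of the assignment code; the 56x56 spatial size is an arbitrary assumption used only to make the FLOP comparison concrete, since the diagram does not specify it.

import torch.nn as nn

def num_params(module):
    # Total number of learnable parameters (weights + biases).
    return sum(p.numel() for p in module.parameters())

def conv_flops(c_in, c_out, k, h_out, w_out):
    # FLOPs of one conv layer, per the formula above.
    return ((k * k) * c_in + 1) * (h_out * w_out) * c_out

# Regular block: two 3x3 convs directly on 256 channels.
regular = nn.Sequential(
    nn.Conv2d(256, 256, kernel_size=3, padding=1),
    nn.Conv2d(256, 256, kernel_size=3, padding=1),
)
# Bottleneck block: 1x1 down to 64 channels, 3x3 at 64, 1x1 back up to 256.
bottleneck = nn.Sequential(
    nn.Conv2d(256, 64, kernel_size=1),
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.Conv2d(64, 256, kernel_size=1),
)
print(num_params(regular), num_params(bottleneck))  # 1180160 70016

h = w = 56  # assumed spatial size, for illustration only
flops_regular = 2 * conv_flops(256, 256, 3, h, w)
flops_bottleneck = (conv_flops(256, 64, 1, h, w)
                    + conv_flops(64, 64, 3, h, w)
                    + conv_flops(64, 256, 1, h, w))
print(f'{flops_regular:.2e} vs {flops_bottleneck:.2e}')  # roughly a 17x gap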

$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$

Part 5: Convolutional Architecture Experiments¶

In this part we will explore convolution networks and the effects of their architecture on accuracy. We'll use our deep CNN implementation and perform various experiments on it while varying the architecture. Then we'll implement our own custom architecture to see whether we can get high classification results on a large subset of CIFAR-10.

Training will be performed on GPU.

In [1]:
import os
import re
import sys
import glob
import numpy as np
import matplotlib.pyplot as plt
import unittest
import torch
import torchvision
import torchvision.transforms as tvtf

%matplotlib inline
%load_ext autoreload
%autoreload 2
In [2]:
seed = 42
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
plt.rcParams.update({'font.size': 12})
test = unittest.TestCase()

Experimenting with model architectures¶

We will now perform a series of experiments that train various model configurations on a part of the CIFAR-10 dataset.

To perform the experiments, you'll need to use a machine with a GPU since training time might be too long otherwise.

Note about running on GPUs¶

Here's an example of running a forward pass on the GPU (assuming you're running this notebook on a GPU-enabled machine).

In [3]:
from hw2.cnn import ResNet

net = ResNet(
    in_size=(3,100,100), out_classes=10, channels=[32, 64]*3,
    pool_every=4, hidden_dims=[100]*2,
    pooling_type='avg', pooling_params=dict(kernel_size=2),
)
net = net.to(device)

test_image = torch.randint(low=0, high=256, size=(3, 100, 100), dtype=torch.float).unsqueeze(0)
test_image = test_image.to(device)

test_out = net(test_image)

Notice how we called .to(device) on both the model and the input tensor. Here the device is a torch.device object that we created above. If an NVIDIA GPU is available on the machine you're running this on, the device will be 'cuda'. When you run .to(device) on a model, it recursively goes over all the model parameter tensors and copies their memory to the GPU. Similarly, calling .to(device) on the input image copies it to GPU memory as well.

In order to train on a GPU, you need to make sure to move all your tensors to it. You'll get errors if you try to mix CPU and GPU tensors in a computation.
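For example, here is a minimal sketch (assuming the notebook is running with device='cuda' as above, and using the net defined in the previous cell) of the error raised when devices are mixed:

cpu_image = torch.randint(low=0, high=256, size=(1, 3, 100, 100), dtype=torch.float)  # stays on the CPU
try:
    net(cpu_image)  # the model's parameters live on the GPU
except RuntimeError as e:
    print(f'Mixing CPU and GPU tensors fails: {e}')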

In [4]:
print(f'This notebook is running with device={device}')
print(f'The model parameter tensors are also on device={next(net.parameters()).device}')
print(f'The test image is also on device={test_image.device}')
print(f'The output is therefore also on device={test_out.device}')
This notebook is running with device=cuda
The model parameter tensors are also on device=cuda:0
The test image is also on device=cuda:0
The output is therefore also on device=cuda:0

Notes on using course servers¶

First, please read the course servers guide carefully.

To run the experiments on the course servers, you can use the py-sbatch.sh script directly to perform a single experiment run in batch mode (since it runs python once), or use the srun command to do a single run in interactive mode. For example, running a single run of experiment 1 interactively (after conda activate of course):

srun -c 2 --gres=gpu:1 --pty python -m hw2.experiments run-exp -n test -K 32 64 -L 2 -P 2 -H 100

To perform multiple runs in batch mode with sbatch (e.g. for running all the configurations of an experiment), you can create your own script based on py-sbatch.sh and invoke whatever commands you need within it.

Don't request more than 2 CPU cores and 1 GPU device for your runs. The code won't be able to utilize more than that anyway, so you'll see no performance gain if you do. It will only cause delays for other students using the servers.

General notes for running experiments¶

  • You can run the experiments on a different machine (e.g. the course servers) and copy the results (files) to the results folder on your local machine. This notebook will only display the results, not run the actual experiment code (except for a demo run).
  • It's important to give each experiment run a name as specified by the notebook instructions later on. Each run has a run_name parameter that will also be the base name of the results file which this notebook will expect to load.
  • You will implement the code to run the experiments in the hw2/experiments.py module. This module has a CLI parser so that you can invoke it as a script and pass in all the configuration parameters for a single experiment run.
  • You should use python -m hw2.experiments run-exp to run an experiment, and not python hw2/experiments.py run-exp, regardless of how/where you run it.

Experiment 1: Network depth and number of filters¶

In this part we will test some different architecture configurations based on our CNN and ResNet. Specifically, we want to try different depths and number of features to see the effects these parameters have on the model's performance.

To do this, we'll define two extra hyperparameters for our model, K (filters_per_layer) and L (layers_per_block).

  • K is a list, containing the number of filters we want to have in our conv layers.
  • L is the number of consecutive layers with the same number of filters to use.

For example, if K=[32, 64] and L=2 it means we want two conv layers with 32 filters followed by two conv layers with 64 filters. If we also use pool_every=3, the feature-extraction part of our model will be:

Conv(X,32)->ReLU->Conv(32,32)->ReLU->Conv(32,64)->ReLU->MaxPool->Conv(64,64)->ReLU
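As an illustration, here is a minimal sketch of how such a sequence could be assembled from K, L and pool_every. The function name is hypothetical; the actual implementation belongs in hw2/cnn.py.

import torch.nn as nn

def make_feature_extractor(in_channels, K, L, pool_every):
    # Each entry of K is repeated L times; a MaxPool is added every pool_every convs.
    layers = []
    channels = [in_channels] + [k for k in K for _ in range(L)]
    for i in range(1, len(channels)):
        layers += [nn.Conv2d(channels[i - 1], channels[i], kernel_size=3, padding=1), nn.ReLU()]
        if i % pool_every == 0:
            layers.append(nn.MaxPool2d(kernel_size=2))
    return nn.Sequential(*layers)

# K=[32, 64], L=2, pool_every=3 reproduces the sequence shown above.
print(make_feature_extractor(3, K=[32, 64], L=2, pool_every=3))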

We'll try various values of the K and L parameters in combination and see how each architecture trains. All other hyperparameters are up to you, including the choice of the optimization algorithm, the learning rate, regularization and architecture hyperparams such as pool_every and hidden_dims. Note that you should select the pool_every parameter wisely per experiment so that you don't end up with zero-width feature maps.

You can try some short manual runs to determine some good values for the hyperparameters or implement cross-validation to do it. However, the dataset size you test on should be large. If you limit the number of batches, make sure to use at least 30000 training images and 5000 validation images.

The important thing is that you state what you used, how you decided on it, and explain your results based on that.

First we need to write some code to run the experiment.

TODO:

  1. Implement the cnn_experiment() function in the hw2/experiments.py module.
  2. If you haven't done so already, it would be an excellent idea to implement the early stopping feature of the Trainer class (a minimal sketch of the idea is shown below).
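For reference, early stopping amounts to tracking the best validation metric seen so far and halting once it stops improving for a fixed number of epochs. A minimal runnable sketch, using a dummy sequence of per-epoch accuracies in place of real Trainer results:

best_acc = None
epochs_without_improvement = 0
early_stopping = 3  # patience: stop after this many epochs with no improvement

# Dummy per-epoch test accuracies, standing in for the real training loop:
for epoch, test_acc in enumerate([52.0, 58.5, 61.2, 61.0, 60.7, 60.9, 59.8]):
    if best_acc is None or test_acc > best_acc:
        best_acc = test_acc            # new best: reset the patience counter
        epochs_without_improvement = 0
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= early_stopping:
            print(f'early stopping at epoch {epoch} (best accuracy {best_acc})')
            break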

The following block tests that your implementation works. It's also meant to show you that each experiment run creates a result file containing the parameters to reproduce and the FitResult object for plotting.

In [5]:
from hw2.experiments import load_experiment, cnn_experiment
from cs236781.plot import plot_fit

# Test experiment1 implementation on a few data samples and with a small model
cnn_experiment(
    'test_run', seed=seed, bs_train=50, batches=10, epochs=10, early_stopping=5,
    filters_per_layer=[32,64], layers_per_block=1, pool_every=1, hidden_dims=[100],
    model_type='resnet',
)

# There should now be a file 'test_run.json' in your `results/` folder.
# We can use it to load the results of the experiment.
cfg, fit_res = load_experiment('results/test_run_L1_K32-64.json')
_, _ = plot_fit(fit_res, train_test_overlay=True)

# And `cfg` contains the exact parameters to reproduce it
print('experiment config: ', cfg)
Files already downloaded and verified
Files already downloaded and verified
--- EPOCH 1/10 ---
train_batch:   0%|          | 0/1000 [00:00<?, ?it/s]
test_batch:   0%|          | 0/834 [00:00<?, ?it/s]
--- EPOCH 2/10 ---
train_batch:   0%|          | 0/1000 [00:00<?, ?it/s]
test_batch:   0%|          | 0/834 [00:00<?, ?it/s]
--- EPOCH 3/10 ---
train_batch:   0%|          | 0/1000 [00:00<?, ?it/s]
test_batch:   0%|          | 0/834 [00:00<?, ?it/s]
--- EPOCH 4/10 ---
train_batch:   0%|          | 0/1000 [00:00<?, ?it/s]
test_batch:   0%|          | 0/834 [00:00<?, ?it/s]
--- EPOCH 5/10 ---
train_batch:   0%|          | 0/1000 [00:00<?, ?it/s]
test_batch:   0%|          | 0/834 [00:00<?, ?it/s]
--- EPOCH 6/10 ---
train_batch:   0%|          | 0/1000 [00:00<?, ?it/s]
test_batch:   0%|          | 0/834 [00:00<?, ?it/s]
--- EPOCH 7/10 ---
train_batch:   0%|          | 0/1000 [00:00<?, ?it/s]
test_batch:   0%|          | 0/834 [00:00<?, ?it/s]
--- EPOCH 8/10 ---
train_batch:   0%|          | 0/1000 [00:00<?, ?it/s]
test_batch:   0%|          | 0/834 [00:00<?, ?it/s]
*** Output file ./results/test_run_L1_K32-64.json written
experiment config:  {'run_name': 'test_run', 'out_dir': './results', 'seed': 42, 'device': None, 'bs_train': 50, 'bs_test': 12, 'batches': 10, 'epochs': 10, 'early_stopping': 5, 'checkpoints': None, 'lr': 0.003, 'reg': 0.001, 'filters_per_layer': [32, 64], 'pool_every': 1, 'hidden_dims': [100], 'model_type': 'resnet', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'bottleneck': False, 'dropout': 0.1, 'kw': {}, 'layers_per_block': 1}

We'll use the following function to load multiple experiment results and plot them together.

In [6]:
def plot_exp_results(filename_pattern, results_dir='results'):
    fig = None
    result_files = glob.glob(os.path.join(results_dir, filename_pattern))
    result_files.sort()
    if len(result_files) == 0:
        print(f'No results found for pattern {filename_pattern}.', file=sys.stderr)
        return
    for filepath in result_files:
        m = re.match(r'exp\d_(\d_)?(.*)\.json', os.path.basename(filepath))
        cfg, fit_res = load_experiment(filepath)
        fig, axes = plot_fit(fit_res, fig, legend=m[2], log_loss=True)
    del cfg['filters_per_layer']
    del cfg['layers_per_block']
    print('common config: ', cfg)

Experiment 1.1: Varying the network depth (L)¶

First, we'll test the effect of the network depth on training.

Configurations:

  • K=32 fixed, with L=2,4,8,16 varying per run
  • K=64 fixed, with L=2,4,8,16 varying per run

So 8 different runs in total.

Naming runs: Each run should be named exp1_1_L{}_K{} where the braces are placeholders for the values. For example, the first run should be named exp1_1_L2_K32.

TODO: Run the experiment on the above configuration with the CNN model. Make sure the result file names are as expected. Use the following blocks to display the results.
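For instance, a minimal sketch of launching all eight runs directly from Python, using the cnn_experiment() function demonstrated above (the hyperparameter values mirror the common config printed below and are our choices, not requirements; on the course servers we invoked the equivalent python -m hw2.experiments run-exp commands instead):

for K in [32, 64]:
    for L in [2, 4, 8, 16]:
        cnn_experiment(
            'exp1_1', seed=seed, bs_train=128, batches=100, epochs=100,
            early_stopping=10, filters_per_layer=[K], layers_per_block=L,
            pool_every=3, hidden_dims=[500, 500], model_type='cnn',
        )  # writes results/exp1_1_L{L}_K{K}.json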

In [7]:
plot_exp_results('exp1_1_L*_K32*.json')
common config:  {'run_name': 'exp1_1', 'out_dir': './results', 'seed': 1500872280, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 3, 'hidden_dims': [500, 500], 'model_type': 'cnn', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'bottleneck': False, 'dropout': 0.1, 'kw': {}}
In [8]:
plot_exp_results('exp1_1_L*_K64*.json')
common config:  {'run_name': 'exp1_1', 'out_dir': './results', 'seed': 516412391, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 3, 'hidden_dims': [500, 500], 'model_type': 'cnn', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'bottleneck': False, 'dropout': 0.1, 'kw': {}}

Experiment 1.2: Varying the number of filters per layer (K)¶

Now we'll test the effect of the number of convolutional filters in each layer.

Configurations:

  • L=2 fixed, with K=[32],[64],[128] varying per run.
  • L=4 fixed, with K=[32],[64],[128] varying per run.
  • L=8 fixed, with K=[32],[64],[128] varying per run.

So 9 different runs in total. To clarify, in each run K takes the value of a list with a single element.

Naming runs: Each run should be named exp1_2_L{}_K{} where the braces are placeholders for the values. For example, the first run should be named exp1_2_L2_K32.

TODO: Run the experiment on the above configuration with the CNN model. Make sure the result file names are as expected. Use the following blocks to display the results.

In [9]:
plot_exp_results('exp1_2_L2*.json')
common config:  {'run_name': 'exp1_2', 'out_dir': './results', 'seed': 621669896, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 3, 'hidden_dims': [500, 500], 'model_type': 'cnn', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'bottleneck': False, 'dropout': 0.1, 'kw': {}}
In [10]:
plot_exp_results('exp1_2_L4*.json')
common config:  {'run_name': 'exp1_2', 'out_dir': './results', 'seed': 847860898, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 3, 'hidden_dims': [500, 500], 'model_type': 'cnn', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'bottleneck': False, 'dropout': 0.1, 'kw': {}}
In [11]:
plot_exp_results('exp1_2_L8*.json')
common config:  {'run_name': 'exp1_2', 'out_dir': './results', 'seed': 209977046, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 3, 'hidden_dims': [500, 500], 'model_type': 'cnn', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'bottleneck': False, 'dropout': 0.1, 'kw': {}}

Experiment 1.3: Varying both the number of filters (K) and network depth (L)¶

Now we'll test the effect of varying both the number of convolutional filters (K) and the network depth (L) together.

Configurations:

  • K=[64, 128] fixed with L=2,3,4 varying per run.

So 3 different runs in total. To clarify, in each run K takes the value of a list with two elements.

Naming runs: Each run should be named exp1_3_L{}_K{}-{} where the braces are placeholders for the values. For example, the first run should be named exp1_3_L2_K64-128.

TODO: Run the experiment on the above configuration with the CNN model. Make sure the result file names are as expected. Use the following blocks to display the results.

In [12]:
plot_exp_results('exp1_3*.json')
common config:  {'run_name': 'exp1_3', 'out_dir': './results', 'seed': 225323461, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 3, 'hidden_dims': [500, 500], 'model_type': 'cnn', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'bottleneck': False, 'dropout': 0.1, 'kw': {}}

Experiment 1.4: Adding depth with Residual Networks¶

Now we'll test the effect of skip connections on the training and performance.

Configurations:

  • K=[32] fixed with L=8,16,32 varying per run.
  • K=[64, 128, 256] fixed with L=2,4,8 varying per run.

So 6 different runs in total.

Naming runs: Each run should be named exp1_4_L{}_K{}-{}-{} where the braces are placeholders for the values.

TODO: Run the experiment on the above configuration with the ResNet model. Make sure the result file names are as expected. Use the following blocks to display the results.

In [13]:
plot_exp_results('exp1_4_L*_K32.json')
common config:  {'run_name': 'exp1_4', 'out_dir': './results', 'seed': 1703498130, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 6, 'hidden_dims': [500, 500], 'model_type': 'resnet', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'bottleneck': False, 'dropout': 0.1, 'kw': {}}
In [14]:
plot_exp_results('exp1_4_L*_K64*.json')
common config:  {'run_name': 'exp1_4', 'out_dir': './results', 'seed': 908140133, 'device': None, 'bs_train': 128, 'bs_test': 32, 'batches': 100, 'epochs': 100, 'early_stopping': 10, 'checkpoints': None, 'lr': 0.001, 'reg': 0.001, 'pool_every': 7, 'hidden_dims': [500, 500], 'model_type': 'resnet', 'activation_type': 'relu', 'activation_params': {}, 'pooling_type': 'max', 'pooling_params': {'kernel_size': 2}, 'batchnorm': True, 'bottleneck': False, 'dropout': 0.1, 'kw': {}}

Questions¶

TODO Answer the following questions. Write your answers in the appropriate variables in the module hw2/answers.py.

In [15]:
from cs236781.answers import display_answer
import hw2.answers

Question 1¶

Analyze your results from experiment 1.1. In particular,

  1. Explain the effect of depth on the accuracy. What depth produces the best results and why do you think that's the case?
  2. Were there values of L for which the network wasn't trainable? What causes this? Suggest two things which may be done to resolve it, at least partially.
In [16]:
display_answer(hw2.answers.part5_q1)

Your answer:

  1. Adding depth to a CNN increases its capacity to capture complex patterns in the data and to build more abstract representations of the input. Our results show that 4 layers gave the best test accuracy, though at the cost of overfitting the training data. One reason further depth did not keep improving results is the training time required for a CNN with many layers to bring its parameters to a sufficiently good local minimum in a much higher-dimensional space. Interestingly, even though at a depth of 16 we did not reach the same train loss as at the other depths, the test loss was actually better, suggesting that the deeper network had better generalization capacity.
  2. Yes, it happened for L=16 with K=64. The likely cause is vanishing gradients: when the derivatives calculated during the backpropagation phase are smaller than one, multiplying them via the chain rule yields tiny gradients, which make each parameter update nearly identical to the previous iteration and thus keep the network in the same state. This can be addressed by using architectures such as ResNet, whose residual connections let gradients pass multiple layers through identity mappings, or by using batch normalization, which normalizes each layer's input by re-centering and re-scaling it. (The sketch below illustrates the effect.)
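As an illustration of the vanishing-gradient effect described above, here is a minimal self-contained sketch (not part of the assignment code): in a deep plain stack of Linear+Tanh layers with no skip connections, the gradient norm at the first layer comes out markedly smaller than at the last layer.

import torch
import torch.nn as nn

torch.manual_seed(42)
# A deep plain stack: 16 blocks of Linear followed by Tanh, no shortcuts.
deep = nn.Sequential(*[nn.Sequential(nn.Linear(64, 64), nn.Tanh()) for _ in range(16)])
x = torch.randn(8, 64)
deep(x).sum().backward()
print(f'first-layer grad norm: {deep[0][0].weight.grad.norm():.2e}')
print(f'last-layer grad norm:  {deep[-1][0].weight.grad.norm():.2e}')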

Question 2¶

Analyze your results from experiment 1.2. In particular, compare to the results of experiment 1.1.

In [17]:
display_answer(hw2.answers.part5_q2)

Your answer: At L=2, the CNN with K=128 performs poorly on train loss and on train/test accuracy, while L=2 with K=32 and with K=64 give similar results to each other. When we increase the depth to 4, all the CNNs start to overfit the training set: every CNN reaches a train accuracy of roughly 100% while its test accuracy stays below 70%. At L=8 the test accuracy becomes steadier, with results similar to the L=4 experiment. At both L=4 and L=8, K=32 gives poor test accuracy. We learn from this that 64 filters is a sweet spot for our learning task at both small and large depths. A high number of filters per layer is desirable because it lets the network capture a more diverse range of visual patterns, potentially improving its ability to detect and represent various features; but, as experiment 1.1 showed, we must also increase the depth so the network can build higher abstractions from all the extracted features and generalize better.

Question 3¶

Analyze your results from experiment 1.3.

In [18]:
display_answer(hw2.answers.part5_q3)

Your answer: Experiment 1.3 shows that all the models reach roughly the same training accuracy across the different depths, while the depth-2 model converges faster than the other two. The explanation is the smaller number of multiplications needed to compute the gradients, which keeps the gradients larger at each step. By adding a different number of filter channels per block, we increase the diversity of the filter activations in the feature maps, so the CNN improved even further, especially at a depth of 3.

Question 4¶

Analyze your results from experiment 1.4. Compare to experiments 1.1 and 1.3.

In [19]:
display_answer(hw2.answers.part5_q4)

Your answer: When we keep the same number of filters per conv layer (K=32), the model that generalizes best is the CNN of depth 32. Interestingly, its train loss is lower than that of the other models, even though the shallower models' loss curves descend with a steeper slope; the same is reflected in the train accuracy. More importantly, looking at the test loss, the L=32, K=32 model performs dramatically better than the other models and reaches 75%-80% accuracy on the test set, while as we decrease the depth the models produce progressively worse test accuracies. In the second configuration (K=[64, 128, 256]) we see very similar behavior: networks with greater depth tend to perform better on the test set in both accuracy and loss. However, relative to the fixed-K runs, the varying channel counts lower the test scores, and this happens not because of the increase in feature-map channels but because of the decrease in depth. The general rule we learn is that to better use the feature maps generated by each residual block, we need more depth to abstract the features further. Compared to experiments 1.1 and 1.3, residual blocks help both with the vanishing-gradient problem (thanks to the shortcut connections) and with keeping early-learned features available, in the forward sense, close to the MLP part of the network.

In [20]:
display_answer(hw2.answers.part5_q5)


$$ \newcommand{\mat}[1]{\boldsymbol {#1}} \newcommand{\mattr}[1]{\boldsymbol {#1}^\top} \newcommand{\matinv}[1]{\boldsymbol {#1}^{-1}} \newcommand{\vec}[1]{\boldsymbol {#1}} \newcommand{\vectr}[1]{\boldsymbol {#1}^\top} \newcommand{\rvar}[1]{\mathrm {#1}} \newcommand{\rvec}[1]{\boldsymbol{\mathrm{#1}}} \newcommand{\diag}{\mathop{\mathrm {diag}}} \newcommand{\set}[1]{\mathbb {#1}} \newcommand{\norm}[1]{\left\lVert#1\right\rVert} \newcommand{\pderiv}[2]{\frac{\partial #1}{\partial #2}} \newcommand{\bb}[1]{\boldsymbol{#1}} $$

Part 6: YOLO - Object Detection¶

In this part we will use an object detection architecture called YOLO (You Only Look Once) to detect objects in images. We'll use pretrained model weights (YOLOv5) found here: https://github.com/ultralytics/yolov5

In [1]:
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load the YOLO model
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
model.to(device)
# Images
img1 = 'imgs/DolphinsInTheSky.jpg'  
img2 = 'imgs/cat-shiba-inu-2.jpg' 
Using cache found in /home/dansdeor/.cache/torch/hub/ultralytics_yolov5_master
requirements: Ultralytics requirement "gitpython>=3.1.30" not found, attempting AutoUpdate...
requirements: ❌ AutoUpdate skipped (offline)
YOLOv5 🚀 2023-6-3 Python-3.8.12 torch-1.10.1 CUDA:0 (NVIDIA GeForce RTX 2080 Ti, 11019MiB)

Fusing layers... 
YOLOv5s summary: 213 layers, 7225885 parameters, 0 gradients
Adding AutoShape... 

Inference with YOLO¶

You are provided with 2 images (img1 and img2). TODO:

  1. Detect objects using the YOLOv5 model for these 2 images.

  2. Print the inference output with bounding boxes.

  3. Calculate the number of pixels within a bounding box and the number in the background.

    Hint: Given you stored the model output in a variable named 'results', you may find 'results.pandas().xyxy' helpful

  4. Look at the inference results and answer the question below.

In [2]:
#Insert the inference code here.
import cv2
import pandas
import numpy as np
from cs236781.answers import display_answer
import hw2.answers

def print_inbox_background_pixels(data_frame: pandas.DataFrame, image_shape):
    # bit_map marks every pixel as background (1); pixels covered by any
    # bounding box are zeroed, so overlapping boxes are counted only once.
    bit_map = np.ones((image_shape[1], image_shape[0]))
    print("Image total pixel count: ", image_shape[1] * image_shape[0])
    for _, f in data_frame.iterrows():
        bounding_box_pixel_count = (int(f['xmax']) - int(f['xmin'])) * (int(f['ymax']) - int(f['ymin']))
        print("Object recognized as {} has {} pixels in its bounding box".format(f['name'], bounding_box_pixel_count))
        # zero out this box's pixels so each pixel is counted at most once
        bit_map[int(f['xmin']):int(f['xmax']), int(f['ymin']):int(f['ymax'])] = 0
    print("The total pixel count in all bounding boxes is: ", np.count_nonzero(bit_map == 0))
    print("The pixel count in the background is: ", np.count_nonzero(bit_map == 1))

with torch.no_grad():
    for img in [img1, img2]:
        output = model(img)
        output.show()
        data_frame = output.pandas().xyxy[0]
        print_inbox_background_pixels(data_frame, cv2.imread(img).shape)
Image total pixel count:  50325
Object recognized as person has 6177 pixels in its bounding box
Object recognized as person has 7632 pixels in its bounding box
Object recognized as surfboard has 1404 pixels in its bounding box
The total pixel count in all bounding boxes is:  13173
The pixel count in the background is:  37152
Image total pixel count:  562500
Object recognized as cat has 166152 pixels in its bounding box
Object recognized as dog has 166797 pixels in its bounding box
Object recognized as cat has 166140 pixels in its bounding box
The total pixel count in all bounding boxes is:  406433
The pixel count in the background is:  156067

Question 1¶

Analyze the inference results of the 2 images.

  1. How well did the model detect the objects in the pictures?
  2. What can possibly be the reason for the model failures? suggest methods to resolve that issue.
In [3]:
display_answer(hw2.answers.part6_q1)

Your answer:

  1. The model did not detect the objects in the pictures very well. In the first image, it detected two dolphins as persons and one dolphin as a surfboard. In the second image, two of the three dogs were recognized as cats, and the cat in the picture was not detected at all.

Even though the model's classification is poor, its localization (box regression) on these two pictures is better: for most objects it successfully fits a bounding box matching their size and position in the picture.

  2. YOLOv5s is a smaller and lighter version of the YOLO model compared to its larger counterparts. With a smaller model size, YOLOv5s may have limited capacity to capture complex patterns and details in the input data. Another possible reason is that the model's training set was not diverse enough: the performance of any object detection model, including YOLOv5s, relies heavily on the quality and diversity of the training dataset. Possible remedies are using one of the larger YOLOv5 variants, or fine-tuning the model on a dataset that includes the relevant classes in similar settings.

Creative Detection Failures¶

Object detection pitfalls could be, for example: occlusion - when objects are partially occluded and thus missing important features; model bias - when a model learns some bias about an object, it may recognize it as something else in a different setup; and many others, like deformation, illumination conditions, cluttered or textured backgrounds, and blurring due to moving objects.

TODO: Take pictures that demonstrate 3 of the above object detection pitfalls, run inference, and analyze the results.

In [4]:
# Insert the inference code here.
with torch.no_grad():
    for img in ["imgs/cat.jpg", "imgs/man.jpg", "imgs/crowd.jpg"]:
        output = model(img)
        output.show()
        data_frame = output.pandas().xyxy[0]
        print_inbox_background_pixels(data_frame, cv2.imread(img).shape)
Image total pixel count:  101010
Object recognized as sports ball has 780 pixels in its bounding box
Object recognized as sports ball has 780 pixels in its bounding box
The total pixel count in all bounding boxes is:  1560
The pixel count in the background is:  99450
Image total pixel count:  411540
The total pixel count in all bounding boxes is:  0
The pixel count in the background is:  411540
Image total pixel count:  217000
Object recognized as baseball bat has 6860 pixels in its bounding box
The total pixel count in all bounding boxes is:  6860
The pixel count in the background is:  210140

Question 3¶

Analyze the results of the inference.

  1. How well did the model detect the objects in the pictures? Explain.
In [5]:
display_answer(hw2.answers.part6_q3)

Your answer: We picked 3 different setups demonstrating object detection pitfalls: a blurred picture of a cat, a low-light image of a person, and a picture of a cluttered crowd of people. For the blurred cat picture, the model detected the cat's eyes as sports balls (they can resemble tennis balls) and failed to detect, let alone classify, the cat itself. For the dark image of the man, the model detected nothing at all, and for the picture of the crowd, it detected a blurred line and classified it as a baseball bat, ignoring all the people in the picture. For the blurring effect, the reason for the poor performance is that blur removes important features the model needs in order to recognize that there is a cat, while making some of the cat's features, like the eyes, resemble features of another class, such as a sports ball. The poor illumination hides features relevant for person localization and classification, like the shape of the head, leaving the model only a small subset of features to work with. The clutter presents numerous distracting or similar-looking objects that add a lot of noise (especially hats and all kinds of accessories) and make the model struggle to differentiate between the target objects.